Feb  2 05:52:36 np0005604943 kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Feb  2 05:52:36 np0005604943 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb  2 05:52:36 np0005604943 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb  2 05:52:36 np0005604943 kernel: BIOS-provided physical RAM map:
Feb  2 05:52:36 np0005604943 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb  2 05:52:36 np0005604943 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb  2 05:52:36 np0005604943 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb  2 05:52:36 np0005604943 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb  2 05:52:36 np0005604943 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb  2 05:52:36 np0005604943 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb  2 05:52:36 np0005604943 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb  2 05:52:36 np0005604943 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Feb  2 05:52:36 np0005604943 kernel: NX (Execute Disable) protection: active
Feb  2 05:52:36 np0005604943 kernel: APIC: Static calls initialized
Feb  2 05:52:36 np0005604943 kernel: SMBIOS 2.8 present.
Feb  2 05:52:36 np0005604943 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb  2 05:52:36 np0005604943 kernel: Hypervisor detected: KVM
Feb  2 05:52:36 np0005604943 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb  2 05:52:36 np0005604943 kernel: kvm-clock: using sched offset of 5218124504 cycles
Feb  2 05:52:36 np0005604943 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb  2 05:52:36 np0005604943 kernel: tsc: Detected 2799.998 MHz processor
Feb  2 05:52:36 np0005604943 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb  2 05:52:36 np0005604943 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb  2 05:52:36 np0005604943 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb  2 05:52:36 np0005604943 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb  2 05:52:36 np0005604943 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb  2 05:52:36 np0005604943 kernel: Using GB pages for direct mapping
Feb  2 05:52:36 np0005604943 kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Feb  2 05:52:36 np0005604943 kernel: ACPI: Early table checksum verification disabled
Feb  2 05:52:36 np0005604943 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb  2 05:52:36 np0005604943 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 05:52:36 np0005604943 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 05:52:36 np0005604943 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 05:52:36 np0005604943 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb  2 05:52:36 np0005604943 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 05:52:36 np0005604943 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 05:52:36 np0005604943 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb  2 05:52:36 np0005604943 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb  2 05:52:36 np0005604943 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb  2 05:52:36 np0005604943 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb  2 05:52:36 np0005604943 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb  2 05:52:36 np0005604943 kernel: No NUMA configuration found
Feb  2 05:52:36 np0005604943 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb  2 05:52:36 np0005604943 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Feb  2 05:52:36 np0005604943 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Feb  2 05:52:36 np0005604943 kernel: Zone ranges:
Feb  2 05:52:36 np0005604943 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb  2 05:52:36 np0005604943 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb  2 05:52:36 np0005604943 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb  2 05:52:36 np0005604943 kernel:  Device   empty
Feb  2 05:52:36 np0005604943 kernel: Movable zone start for each node
Feb  2 05:52:36 np0005604943 kernel: Early memory node ranges
Feb  2 05:52:36 np0005604943 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb  2 05:52:36 np0005604943 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb  2 05:52:36 np0005604943 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb  2 05:52:36 np0005604943 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb  2 05:52:36 np0005604943 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb  2 05:52:36 np0005604943 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb  2 05:52:36 np0005604943 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb  2 05:52:36 np0005604943 kernel: ACPI: PM-Timer IO Port: 0x608
Feb  2 05:52:36 np0005604943 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb  2 05:52:36 np0005604943 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb  2 05:52:36 np0005604943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb  2 05:52:36 np0005604943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb  2 05:52:36 np0005604943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb  2 05:52:36 np0005604943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb  2 05:52:36 np0005604943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb  2 05:52:36 np0005604943 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb  2 05:52:36 np0005604943 kernel: TSC deadline timer available
Feb  2 05:52:36 np0005604943 kernel: CPU topo: Max. logical packages:   8
Feb  2 05:52:36 np0005604943 kernel: CPU topo: Max. logical dies:       8
Feb  2 05:52:36 np0005604943 kernel: CPU topo: Max. dies per package:   1
Feb  2 05:52:36 np0005604943 kernel: CPU topo: Max. threads per core:   1
Feb  2 05:52:36 np0005604943 kernel: CPU topo: Num. cores per package:     1
Feb  2 05:52:36 np0005604943 kernel: CPU topo: Num. threads per package:   1
Feb  2 05:52:36 np0005604943 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb  2 05:52:36 np0005604943 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb  2 05:52:36 np0005604943 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb  2 05:52:36 np0005604943 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb  2 05:52:36 np0005604943 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb  2 05:52:36 np0005604943 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb  2 05:52:36 np0005604943 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb  2 05:52:36 np0005604943 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb  2 05:52:36 np0005604943 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb  2 05:52:36 np0005604943 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb  2 05:52:36 np0005604943 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb  2 05:52:36 np0005604943 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb  2 05:52:36 np0005604943 kernel: Booting paravirtualized kernel on KVM
Feb  2 05:52:36 np0005604943 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb  2 05:52:36 np0005604943 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb  2 05:52:36 np0005604943 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb  2 05:52:36 np0005604943 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb  2 05:52:36 np0005604943 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb  2 05:52:36 np0005604943 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Feb  2 05:52:36 np0005604943 kernel: random: crng init done
Feb  2 05:52:36 np0005604943 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb  2 05:52:36 np0005604943 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb  2 05:52:36 np0005604943 kernel: Fallback order for Node 0: 0 
Feb  2 05:52:36 np0005604943 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb  2 05:52:36 np0005604943 kernel: Policy zone: Normal
Feb  2 05:52:36 np0005604943 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb  2 05:52:36 np0005604943 kernel: software IO TLB: area num 8.
Feb  2 05:52:36 np0005604943 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb  2 05:52:36 np0005604943 kernel: ftrace: allocating 49438 entries in 194 pages
Feb  2 05:52:36 np0005604943 kernel: ftrace: allocated 194 pages with 3 groups
Feb  2 05:52:36 np0005604943 kernel: Dynamic Preempt: voluntary
Feb  2 05:52:36 np0005604943 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb  2 05:52:36 np0005604943 kernel: rcu: 	RCU event tracing is enabled.
Feb  2 05:52:36 np0005604943 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb  2 05:52:36 np0005604943 kernel: 	Trampoline variant of Tasks RCU enabled.
Feb  2 05:52:36 np0005604943 kernel: 	Rude variant of Tasks RCU enabled.
Feb  2 05:52:36 np0005604943 kernel: 	Tracing variant of Tasks RCU enabled.
Feb  2 05:52:36 np0005604943 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb  2 05:52:36 np0005604943 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb  2 05:52:36 np0005604943 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb  2 05:52:36 np0005604943 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb  2 05:52:36 np0005604943 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb  2 05:52:36 np0005604943 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb  2 05:52:36 np0005604943 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb  2 05:52:36 np0005604943 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb  2 05:52:36 np0005604943 kernel: Console: colour VGA+ 80x25
Feb  2 05:52:36 np0005604943 kernel: printk: console [ttyS0] enabled
Feb  2 05:52:36 np0005604943 kernel: ACPI: Core revision 20230331
Feb  2 05:52:36 np0005604943 kernel: APIC: Switch to symmetric I/O mode setup
Feb  2 05:52:36 np0005604943 kernel: x2apic enabled
Feb  2 05:52:36 np0005604943 kernel: APIC: Switched APIC routing to: physical x2apic
Feb  2 05:52:36 np0005604943 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb  2 05:52:36 np0005604943 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Feb  2 05:52:36 np0005604943 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb  2 05:52:36 np0005604943 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb  2 05:52:36 np0005604943 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb  2 05:52:36 np0005604943 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb  2 05:52:36 np0005604943 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb  2 05:52:36 np0005604943 kernel: Spectre V2 : Mitigation: Retpolines
Feb  2 05:52:36 np0005604943 kernel: RETBleed: Mitigation: untrained return thunk
Feb  2 05:52:36 np0005604943 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb  2 05:52:36 np0005604943 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb  2 05:52:36 np0005604943 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb  2 05:52:36 np0005604943 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb  2 05:52:36 np0005604943 kernel: active return thunk: retbleed_return_thunk
Feb  2 05:52:36 np0005604943 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb  2 05:52:36 np0005604943 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb  2 05:52:36 np0005604943 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb  2 05:52:36 np0005604943 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb  2 05:52:36 np0005604943 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb  2 05:52:36 np0005604943 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb  2 05:52:36 np0005604943 kernel: Freeing SMP alternatives memory: 40K
Feb  2 05:52:36 np0005604943 kernel: pid_max: default: 32768 minimum: 301
Feb  2 05:52:36 np0005604943 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb  2 05:52:36 np0005604943 kernel: landlock: Up and running.
Feb  2 05:52:36 np0005604943 kernel: Yama: becoming mindful.
Feb  2 05:52:36 np0005604943 kernel: SELinux:  Initializing.
Feb  2 05:52:36 np0005604943 kernel: LSM support for eBPF active
Feb  2 05:52:36 np0005604943 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb  2 05:52:36 np0005604943 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb  2 05:52:36 np0005604943 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb  2 05:52:36 np0005604943 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb  2 05:52:36 np0005604943 kernel: ... version:                0
Feb  2 05:52:36 np0005604943 kernel: ... bit width:              48
Feb  2 05:52:36 np0005604943 kernel: ... generic registers:      6
Feb  2 05:52:36 np0005604943 kernel: ... value mask:             0000ffffffffffff
Feb  2 05:52:36 np0005604943 kernel: ... max period:             00007fffffffffff
Feb  2 05:52:36 np0005604943 kernel: ... fixed-purpose events:   0
Feb  2 05:52:36 np0005604943 kernel: ... event mask:             000000000000003f
Feb  2 05:52:36 np0005604943 kernel: signal: max sigframe size: 1776
Feb  2 05:52:36 np0005604943 kernel: rcu: Hierarchical SRCU implementation.
Feb  2 05:52:36 np0005604943 kernel: rcu: 	Max phase no-delay instances is 400.
Feb  2 05:52:36 np0005604943 kernel: smp: Bringing up secondary CPUs ...
Feb  2 05:52:36 np0005604943 kernel: smpboot: x86: Booting SMP configuration:
Feb  2 05:52:36 np0005604943 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb  2 05:52:36 np0005604943 kernel: smp: Brought up 1 node, 8 CPUs
Feb  2 05:52:36 np0005604943 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Feb  2 05:52:36 np0005604943 kernel: node 0 deferred pages initialised in 9ms
Feb  2 05:52:36 np0005604943 kernel: Memory: 7763560K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618400K reserved, 0K cma-reserved)
Feb  2 05:52:36 np0005604943 kernel: devtmpfs: initialized
Feb  2 05:52:36 np0005604943 kernel: x86/mm: Memory block size: 128MB
Feb  2 05:52:36 np0005604943 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb  2 05:52:36 np0005604943 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb  2 05:52:36 np0005604943 kernel: pinctrl core: initialized pinctrl subsystem
Feb  2 05:52:36 np0005604943 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb  2 05:52:36 np0005604943 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb  2 05:52:36 np0005604943 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb  2 05:52:36 np0005604943 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb  2 05:52:36 np0005604943 kernel: audit: initializing netlink subsys (disabled)
Feb  2 05:52:36 np0005604943 kernel: audit: type=2000 audit(1770029555.393:1): state=initialized audit_enabled=0 res=1
Feb  2 05:52:36 np0005604943 kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb  2 05:52:36 np0005604943 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb  2 05:52:36 np0005604943 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb  2 05:52:36 np0005604943 kernel: cpuidle: using governor menu
Feb  2 05:52:36 np0005604943 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb  2 05:52:36 np0005604943 kernel: PCI: Using configuration type 1 for base access
Feb  2 05:52:36 np0005604943 kernel: PCI: Using configuration type 1 for extended access
Feb  2 05:52:36 np0005604943 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb  2 05:52:36 np0005604943 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb  2 05:52:36 np0005604943 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb  2 05:52:36 np0005604943 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb  2 05:52:36 np0005604943 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb  2 05:52:36 np0005604943 kernel: Demotion targets for Node 0: null
Feb  2 05:52:36 np0005604943 kernel: cryptd: max_cpu_qlen set to 1000
Feb  2 05:52:36 np0005604943 kernel: ACPI: Added _OSI(Module Device)
Feb  2 05:52:36 np0005604943 kernel: ACPI: Added _OSI(Processor Device)
Feb  2 05:52:36 np0005604943 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb  2 05:52:36 np0005604943 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb  2 05:52:36 np0005604943 kernel: ACPI: Interpreter enabled
Feb  2 05:52:36 np0005604943 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb  2 05:52:36 np0005604943 kernel: ACPI: Using IOAPIC for interrupt routing
Feb  2 05:52:36 np0005604943 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb  2 05:52:36 np0005604943 kernel: PCI: Using E820 reservations for host bridge windows
Feb  2 05:52:36 np0005604943 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb  2 05:52:36 np0005604943 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb  2 05:52:36 np0005604943 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [3] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [4] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [5] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [6] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [7] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [8] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [9] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [10] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [11] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [12] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [13] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [14] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [15] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [16] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [17] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [18] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [19] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [20] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [21] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [22] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [23] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [24] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [25] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [26] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [27] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [28] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [29] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [30] registered
Feb  2 05:52:36 np0005604943 kernel: acpiphp: Slot [31] registered
Feb  2 05:52:36 np0005604943 kernel: PCI host bridge to bus 0000:00
Feb  2 05:52:36 np0005604943 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb  2 05:52:36 np0005604943 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb  2 05:52:36 np0005604943 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb  2 05:52:36 np0005604943 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb  2 05:52:36 np0005604943 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb  2 05:52:36 np0005604943 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Feb  2 05:52:36 np0005604943 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb  2 05:52:36 np0005604943 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb  2 05:52:36 np0005604943 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb  2 05:52:36 np0005604943 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb  2 05:52:36 np0005604943 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb  2 05:52:36 np0005604943 kernel: iommu: Default domain type: Translated
Feb  2 05:52:36 np0005604943 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb  2 05:52:36 np0005604943 kernel: SCSI subsystem initialized
Feb  2 05:52:36 np0005604943 kernel: ACPI: bus type USB registered
Feb  2 05:52:36 np0005604943 kernel: usbcore: registered new interface driver usbfs
Feb  2 05:52:36 np0005604943 kernel: usbcore: registered new interface driver hub
Feb  2 05:52:36 np0005604943 kernel: usbcore: registered new device driver usb
Feb  2 05:52:36 np0005604943 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb  2 05:52:36 np0005604943 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb  2 05:52:36 np0005604943 kernel: PTP clock support registered
Feb  2 05:52:36 np0005604943 kernel: EDAC MC: Ver: 3.0.0
Feb  2 05:52:36 np0005604943 kernel: NetLabel: Initializing
Feb  2 05:52:36 np0005604943 kernel: NetLabel:  domain hash size = 128
Feb  2 05:52:36 np0005604943 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb  2 05:52:36 np0005604943 kernel: NetLabel:  unlabeled traffic allowed by default
Feb  2 05:52:36 np0005604943 kernel: PCI: Using ACPI for IRQ routing
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb  2 05:52:36 np0005604943 kernel: vgaarb: loaded
Feb  2 05:52:36 np0005604943 kernel: clocksource: Switched to clocksource kvm-clock
Feb  2 05:52:36 np0005604943 kernel: VFS: Disk quotas dquot_6.6.0
Feb  2 05:52:36 np0005604943 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb  2 05:52:36 np0005604943 kernel: pnp: PnP ACPI init
Feb  2 05:52:36 np0005604943 kernel: pnp: PnP ACPI: found 5 devices
Feb  2 05:52:36 np0005604943 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb  2 05:52:36 np0005604943 kernel: NET: Registered PF_INET protocol family
Feb  2 05:52:36 np0005604943 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb  2 05:52:36 np0005604943 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb  2 05:52:36 np0005604943 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb  2 05:52:36 np0005604943 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb  2 05:52:36 np0005604943 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb  2 05:52:36 np0005604943 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb  2 05:52:36 np0005604943 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb  2 05:52:36 np0005604943 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb  2 05:52:36 np0005604943 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb  2 05:52:36 np0005604943 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb  2 05:52:36 np0005604943 kernel: NET: Registered PF_XDP protocol family
Feb  2 05:52:36 np0005604943 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb  2 05:52:36 np0005604943 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb  2 05:52:36 np0005604943 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb  2 05:52:36 np0005604943 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb  2 05:52:36 np0005604943 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb  2 05:52:36 np0005604943 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb  2 05:52:36 np0005604943 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 24570 usecs
Feb  2 05:52:36 np0005604943 kernel: PCI: CLS 0 bytes, default 64
Feb  2 05:52:36 np0005604943 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb  2 05:52:36 np0005604943 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb  2 05:52:36 np0005604943 kernel: Trying to unpack rootfs image as initramfs...
Feb  2 05:52:36 np0005604943 kernel: ACPI: bus type thunderbolt registered
Feb  2 05:52:36 np0005604943 kernel: Initialise system trusted keyrings
Feb  2 05:52:36 np0005604943 kernel: Key type blacklist registered
Feb  2 05:52:36 np0005604943 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb  2 05:52:36 np0005604943 kernel: zbud: loaded
Feb  2 05:52:36 np0005604943 kernel: integrity: Platform Keyring initialized
Feb  2 05:52:36 np0005604943 kernel: integrity: Machine keyring initialized
Feb  2 05:52:36 np0005604943 kernel: Freeing initrd memory: 88000K
Feb  2 05:52:36 np0005604943 kernel: NET: Registered PF_ALG protocol family
Feb  2 05:52:36 np0005604943 kernel: xor: automatically using best checksumming function   avx       
Feb  2 05:52:36 np0005604943 kernel: Key type asymmetric registered
Feb  2 05:52:36 np0005604943 kernel: Asymmetric key parser 'x509' registered
Feb  2 05:52:36 np0005604943 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb  2 05:52:36 np0005604943 kernel: io scheduler mq-deadline registered
Feb  2 05:52:36 np0005604943 kernel: io scheduler kyber registered
Feb  2 05:52:36 np0005604943 kernel: io scheduler bfq registered
Feb  2 05:52:36 np0005604943 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb  2 05:52:36 np0005604943 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb  2 05:52:36 np0005604943 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb  2 05:52:36 np0005604943 kernel: ACPI: button: Power Button [PWRF]
Feb  2 05:52:36 np0005604943 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb  2 05:52:36 np0005604943 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb  2 05:52:36 np0005604943 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb  2 05:52:36 np0005604943 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb  2 05:52:36 np0005604943 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb  2 05:52:36 np0005604943 kernel: Non-volatile memory driver v1.3
Feb  2 05:52:36 np0005604943 kernel: rdac: device handler registered
Feb  2 05:52:36 np0005604943 kernel: hp_sw: device handler registered
Feb  2 05:52:36 np0005604943 kernel: emc: device handler registered
Feb  2 05:52:36 np0005604943 kernel: alua: device handler registered
Feb  2 05:52:36 np0005604943 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb  2 05:52:36 np0005604943 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb  2 05:52:36 np0005604943 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb  2 05:52:36 np0005604943 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb  2 05:52:36 np0005604943 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb  2 05:52:36 np0005604943 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb  2 05:52:36 np0005604943 kernel: usb usb1: Product: UHCI Host Controller
Feb  2 05:52:36 np0005604943 kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Feb  2 05:52:36 np0005604943 kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb  2 05:52:36 np0005604943 kernel: hub 1-0:1.0: USB hub found
Feb  2 05:52:36 np0005604943 kernel: hub 1-0:1.0: 2 ports detected
Feb  2 05:52:36 np0005604943 kernel: usbcore: registered new interface driver usbserial_generic
Feb  2 05:52:36 np0005604943 kernel: usbserial: USB Serial support registered for generic
Feb  2 05:52:36 np0005604943 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb  2 05:52:36 np0005604943 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb  2 05:52:36 np0005604943 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb  2 05:52:36 np0005604943 kernel: mousedev: PS/2 mouse device common for all mice
Feb  2 05:52:36 np0005604943 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb  2 05:52:36 np0005604943 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb  2 05:52:36 np0005604943 kernel: rtc_cmos 00:04: registered as rtc0
Feb  2 05:52:36 np0005604943 kernel: rtc_cmos 00:04: setting system clock to 2026-02-02T10:52:35 UTC (1770029555)
Feb  2 05:52:36 np0005604943 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb  2 05:52:36 np0005604943 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb  2 05:52:36 np0005604943 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb  2 05:52:36 np0005604943 kernel: usbcore: registered new interface driver usbhid
Feb  2 05:52:36 np0005604943 kernel: usbhid: USB HID core driver
Feb  2 05:52:36 np0005604943 kernel: drop_monitor: Initializing network drop monitor service
Feb  2 05:52:36 np0005604943 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb  2 05:52:36 np0005604943 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb  2 05:52:36 np0005604943 kernel: Initializing XFRM netlink socket
Feb  2 05:52:36 np0005604943 kernel: NET: Registered PF_INET6 protocol family
Feb  2 05:52:36 np0005604943 kernel: Segment Routing with IPv6
Feb  2 05:52:36 np0005604943 kernel: NET: Registered PF_PACKET protocol family
Feb  2 05:52:36 np0005604943 kernel: mpls_gso: MPLS GSO support
Feb  2 05:52:36 np0005604943 kernel: IPI shorthand broadcast: enabled
Feb  2 05:52:36 np0005604943 kernel: AVX2 version of gcm_enc/dec engaged.
Feb  2 05:52:36 np0005604943 kernel: AES CTR mode by8 optimization enabled
Feb  2 05:52:36 np0005604943 kernel: sched_clock: Marking stable (951013126, 157001862)->(1212086316, -104071328)
Feb  2 05:52:36 np0005604943 kernel: registered taskstats version 1
Feb  2 05:52:36 np0005604943 kernel: Loading compiled-in X.509 certificates
Feb  2 05:52:36 np0005604943 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb  2 05:52:36 np0005604943 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb  2 05:52:36 np0005604943 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb  2 05:52:36 np0005604943 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb  2 05:52:36 np0005604943 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb  2 05:52:36 np0005604943 kernel: Demotion targets for Node 0: null
Feb  2 05:52:36 np0005604943 kernel: page_owner is disabled
Feb  2 05:52:36 np0005604943 kernel: Key type .fscrypt registered
Feb  2 05:52:36 np0005604943 kernel: Key type fscrypt-provisioning registered
Feb  2 05:52:36 np0005604943 kernel: Key type big_key registered
Feb  2 05:52:36 np0005604943 kernel: Key type encrypted registered
Feb  2 05:52:36 np0005604943 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb  2 05:52:36 np0005604943 kernel: Loading compiled-in module X.509 certificates
Feb  2 05:52:36 np0005604943 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb  2 05:52:36 np0005604943 kernel: ima: Allocated hash algorithm: sha256
Feb  2 05:52:36 np0005604943 kernel: ima: No architecture policies found
Feb  2 05:52:36 np0005604943 kernel: evm: Initialising EVM extended attributes:
Feb  2 05:52:36 np0005604943 kernel: evm: security.selinux
Feb  2 05:52:36 np0005604943 kernel: evm: security.SMACK64 (disabled)
Feb  2 05:52:36 np0005604943 kernel: evm: security.SMACK64EXEC (disabled)
Feb  2 05:52:36 np0005604943 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb  2 05:52:36 np0005604943 kernel: evm: security.SMACK64MMAP (disabled)
Feb  2 05:52:36 np0005604943 kernel: evm: security.apparmor (disabled)
Feb  2 05:52:36 np0005604943 kernel: evm: security.ima
Feb  2 05:52:36 np0005604943 kernel: evm: security.capability
Feb  2 05:52:36 np0005604943 kernel: evm: HMAC attrs: 0x1
Feb  2 05:52:36 np0005604943 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb  2 05:52:36 np0005604943 kernel: Running certificate verification RSA selftest
Feb  2 05:52:36 np0005604943 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb  2 05:52:36 np0005604943 kernel: Running certificate verification ECDSA selftest
Feb  2 05:52:36 np0005604943 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb  2 05:52:36 np0005604943 kernel: clk: Disabling unused clocks
Feb  2 05:52:36 np0005604943 kernel: Freeing unused decrypted memory: 2028K
Feb  2 05:52:36 np0005604943 kernel: Freeing unused kernel image (initmem) memory: 4196K
Feb  2 05:52:36 np0005604943 kernel: Write protecting the kernel read-only data: 30720k
Feb  2 05:52:36 np0005604943 kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Feb  2 05:52:36 np0005604943 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb  2 05:52:36 np0005604943 kernel: Run /init as init process
Feb  2 05:52:36 np0005604943 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  2 05:52:36 np0005604943 systemd: Detected virtualization kvm.
Feb  2 05:52:36 np0005604943 systemd: Detected architecture x86-64.
Feb  2 05:52:36 np0005604943 systemd: Running in initrd.
Feb  2 05:52:36 np0005604943 systemd: No hostname configured, using default hostname.
Feb  2 05:52:36 np0005604943 systemd: Hostname set to <localhost>.
Feb  2 05:52:36 np0005604943 systemd: Initializing machine ID from VM UUID.
Feb  2 05:52:36 np0005604943 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb  2 05:52:36 np0005604943 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb  2 05:52:36 np0005604943 kernel: usb 1-1: Product: QEMU USB Tablet
Feb  2 05:52:36 np0005604943 kernel: usb 1-1: Manufacturer: QEMU
Feb  2 05:52:36 np0005604943 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb  2 05:52:36 np0005604943 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb  2 05:52:36 np0005604943 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb  2 05:52:36 np0005604943 systemd: Queued start job for default target Initrd Default Target.
Feb  2 05:52:36 np0005604943 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb  2 05:52:36 np0005604943 systemd: Reached target Local Encrypted Volumes.
Feb  2 05:52:36 np0005604943 systemd: Reached target Initrd /usr File System.
Feb  2 05:52:36 np0005604943 systemd: Reached target Local File Systems.
Feb  2 05:52:36 np0005604943 systemd: Reached target Path Units.
Feb  2 05:52:36 np0005604943 systemd: Reached target Slice Units.
Feb  2 05:52:36 np0005604943 systemd: Reached target Swaps.
Feb  2 05:52:36 np0005604943 systemd: Reached target Timer Units.
Feb  2 05:52:36 np0005604943 systemd: Listening on D-Bus System Message Bus Socket.
Feb  2 05:52:36 np0005604943 systemd: Listening on Journal Socket (/dev/log).
Feb  2 05:52:36 np0005604943 systemd: Listening on Journal Socket.
Feb  2 05:52:36 np0005604943 systemd: Listening on udev Control Socket.
Feb  2 05:52:36 np0005604943 systemd: Listening on udev Kernel Socket.
Feb  2 05:52:36 np0005604943 systemd: Reached target Socket Units.
Feb  2 05:52:36 np0005604943 systemd: Starting Create List of Static Device Nodes...
Feb  2 05:52:36 np0005604943 systemd: Starting Journal Service...
Feb  2 05:52:36 np0005604943 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb  2 05:52:36 np0005604943 systemd: Starting Apply Kernel Variables...
Feb  2 05:52:36 np0005604943 systemd: Starting Create System Users...
Feb  2 05:52:36 np0005604943 systemd: Starting Setup Virtual Console...
Feb  2 05:52:36 np0005604943 systemd: Finished Create List of Static Device Nodes.
Feb  2 05:52:36 np0005604943 systemd: Finished Apply Kernel Variables.
Feb  2 05:52:36 np0005604943 systemd: Finished Create System Users.
Feb  2 05:52:36 np0005604943 systemd-journald[306]: Journal started
Feb  2 05:52:36 np0005604943 systemd-journald[306]: Runtime Journal (/run/log/journal/4ccddb6be5c44cee96abcfd456961526) is 8.0M, max 153.6M, 145.6M free.
Feb  2 05:52:36 np0005604943 systemd-sysusers[311]: Creating group 'users' with GID 100.
Feb  2 05:52:36 np0005604943 systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Feb  2 05:52:36 np0005604943 systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb  2 05:52:36 np0005604943 systemd: Started Journal Service.
Feb  2 05:52:36 np0005604943 systemd[1]: Starting Create Static Device Nodes in /dev...
Feb  2 05:52:36 np0005604943 systemd[1]: Starting Create Volatile Files and Directories...
Feb  2 05:52:36 np0005604943 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb  2 05:52:36 np0005604943 systemd[1]: Finished Create Volatile Files and Directories.
Feb  2 05:52:36 np0005604943 systemd[1]: Finished Setup Virtual Console.
Feb  2 05:52:36 np0005604943 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb  2 05:52:36 np0005604943 systemd[1]: Starting dracut cmdline hook...
Feb  2 05:52:36 np0005604943 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Feb  2 05:52:36 np0005604943 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb  2 05:52:36 np0005604943 systemd[1]: Finished dracut cmdline hook.
Feb  2 05:52:36 np0005604943 systemd[1]: Starting dracut pre-udev hook...
Feb  2 05:52:36 np0005604943 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb  2 05:52:36 np0005604943 kernel: device-mapper: uevent: version 1.0.3
Feb  2 05:52:36 np0005604943 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb  2 05:52:36 np0005604943 kernel: RPC: Registered named UNIX socket transport module.
Feb  2 05:52:36 np0005604943 kernel: RPC: Registered udp transport module.
Feb  2 05:52:36 np0005604943 kernel: RPC: Registered tcp transport module.
Feb  2 05:52:36 np0005604943 kernel: RPC: Registered tcp-with-tls transport module.
Feb  2 05:52:36 np0005604943 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb  2 05:52:36 np0005604943 rpc.statd[441]: Version 2.5.4 starting
Feb  2 05:52:36 np0005604943 rpc.statd[441]: Initializing NSM state
Feb  2 05:52:36 np0005604943 rpc.idmapd[446]: Setting log level to 0
Feb  2 05:52:36 np0005604943 systemd[1]: Finished dracut pre-udev hook.
Feb  2 05:52:36 np0005604943 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb  2 05:52:36 np0005604943 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Feb  2 05:52:36 np0005604943 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb  2 05:52:36 np0005604943 systemd[1]: Starting dracut pre-trigger hook...
Feb  2 05:52:36 np0005604943 systemd[1]: Finished dracut pre-trigger hook.
Feb  2 05:52:36 np0005604943 systemd[1]: Starting Coldplug All udev Devices...
Feb  2 05:52:36 np0005604943 systemd[1]: Created slice Slice /system/modprobe.
Feb  2 05:52:36 np0005604943 systemd[1]: Starting Load Kernel Module configfs...
Feb  2 05:52:36 np0005604943 systemd[1]: Finished Coldplug All udev Devices.
Feb  2 05:52:36 np0005604943 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  2 05:52:36 np0005604943 systemd[1]: Finished Load Kernel Module configfs.
Feb  2 05:52:36 np0005604943 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb  2 05:52:36 np0005604943 systemd[1]: Reached target Network.
Feb  2 05:52:36 np0005604943 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb  2 05:52:36 np0005604943 systemd[1]: Starting dracut initqueue hook...
Feb  2 05:52:36 np0005604943 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb  2 05:52:36 np0005604943 systemd-udevd[494]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 05:52:36 np0005604943 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb  2 05:52:36 np0005604943 kernel: vda: vda1
Feb  2 05:52:36 np0005604943 kernel: scsi host0: ata_piix
Feb  2 05:52:36 np0005604943 kernel: scsi host1: ata_piix
Feb  2 05:52:36 np0005604943 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb  2 05:52:36 np0005604943 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb  2 05:52:36 np0005604943 systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb  2 05:52:36 np0005604943 systemd[1]: Reached target Initrd Root Device.
Feb  2 05:52:36 np0005604943 kernel: ata1: found unknown device (class 0)
Feb  2 05:52:36 np0005604943 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb  2 05:52:36 np0005604943 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb  2 05:52:36 np0005604943 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb  2 05:52:36 np0005604943 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb  2 05:52:36 np0005604943 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb  2 05:52:37 np0005604943 systemd[1]: Mounting Kernel Configuration File System...
Feb  2 05:52:37 np0005604943 systemd[1]: Mounted Kernel Configuration File System.
Feb  2 05:52:37 np0005604943 systemd[1]: Reached target System Initialization.
Feb  2 05:52:37 np0005604943 systemd[1]: Reached target Basic System.
Feb  2 05:52:37 np0005604943 systemd[1]: Finished dracut initqueue hook.
Feb  2 05:52:37 np0005604943 systemd[1]: Reached target Preparation for Remote File Systems.
Feb  2 05:52:37 np0005604943 systemd[1]: Reached target Remote Encrypted Volumes.
Feb  2 05:52:37 np0005604943 systemd[1]: Reached target Remote File Systems.
Feb  2 05:52:37 np0005604943 systemd[1]: Starting dracut pre-mount hook...
Feb  2 05:52:37 np0005604943 systemd[1]: Finished dracut pre-mount hook.
Feb  2 05:52:37 np0005604943 systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Feb  2 05:52:37 np0005604943 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Feb  2 05:52:37 np0005604943 systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb  2 05:52:37 np0005604943 systemd[1]: Mounting /sysroot...
Feb  2 05:52:37 np0005604943 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb  2 05:52:37 np0005604943 kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Feb  2 05:52:37 np0005604943 kernel: XFS (vda1): Ending clean mount
Feb  2 05:52:37 np0005604943 systemd[1]: Mounted /sysroot.
Feb  2 05:52:37 np0005604943 systemd[1]: Reached target Initrd Root File System.
Feb  2 05:52:37 np0005604943 systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb  2 05:52:37 np0005604943 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb  2 05:52:37 np0005604943 systemd[1]: Reached target Initrd File Systems.
Feb  2 05:52:37 np0005604943 systemd[1]: Reached target Initrd Default Target.
Feb  2 05:52:37 np0005604943 systemd[1]: Starting dracut mount hook...
Feb  2 05:52:37 np0005604943 systemd[1]: Finished dracut mount hook.
Feb  2 05:52:37 np0005604943 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb  2 05:52:37 np0005604943 rpc.idmapd[446]: exiting on signal 15
Feb  2 05:52:37 np0005604943 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb  2 05:52:37 np0005604943 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Network.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Remote Encrypted Volumes.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Timer Units.
Feb  2 05:52:37 np0005604943 systemd[1]: dbus.socket: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Closed D-Bus System Message Bus Socket.
Feb  2 05:52:37 np0005604943 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Initrd Default Target.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Basic System.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Initrd Root Device.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Initrd /usr File System.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Path Units.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Remote File Systems.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Preparation for Remote File Systems.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Slice Units.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Socket Units.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target System Initialization.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Local File Systems.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Swaps.
Feb  2 05:52:37 np0005604943 systemd[1]: dracut-mount.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped dracut mount hook.
Feb  2 05:52:37 np0005604943 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped dracut pre-mount hook.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped target Local Encrypted Volumes.
Feb  2 05:52:37 np0005604943 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb  2 05:52:37 np0005604943 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped dracut initqueue hook.
Feb  2 05:52:37 np0005604943 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped Apply Kernel Variables.
Feb  2 05:52:37 np0005604943 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped Create Volatile Files and Directories.
Feb  2 05:52:37 np0005604943 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped Coldplug All udev Devices.
Feb  2 05:52:37 np0005604943 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped dracut pre-trigger hook.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb  2 05:52:37 np0005604943 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped Setup Virtual Console.
Feb  2 05:52:37 np0005604943 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb  2 05:52:37 np0005604943 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb  2 05:52:37 np0005604943 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Closed udev Control Socket.
Feb  2 05:52:37 np0005604943 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Closed udev Kernel Socket.
Feb  2 05:52:37 np0005604943 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped dracut pre-udev hook.
Feb  2 05:52:37 np0005604943 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped dracut cmdline hook.
Feb  2 05:52:37 np0005604943 systemd[1]: Starting Cleanup udev Database...
Feb  2 05:52:37 np0005604943 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb  2 05:52:37 np0005604943 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped Create List of Static Device Nodes.
Feb  2 05:52:37 np0005604943 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Stopped Create System Users.
Feb  2 05:52:37 np0005604943 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb  2 05:52:37 np0005604943 systemd[1]: Finished Cleanup udev Database.
Feb  2 05:52:37 np0005604943 systemd[1]: Reached target Switch Root.
Feb  2 05:52:37 np0005604943 systemd[1]: Starting Switch Root...
Feb  2 05:52:37 np0005604943 systemd[1]: Switching root.
Feb  2 05:52:37 np0005604943 systemd-journald[306]: Journal stopped
Feb  2 05:52:38 np0005604943 systemd-journald: Received SIGTERM from PID 1 (systemd).
Feb  2 05:52:38 np0005604943 kernel: audit: type=1404 audit(1770029558.142:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb  2 05:52:38 np0005604943 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 05:52:38 np0005604943 kernel: SELinux:  policy capability open_perms=1
Feb  2 05:52:38 np0005604943 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 05:52:38 np0005604943 kernel: SELinux:  policy capability always_check_network=0
Feb  2 05:52:38 np0005604943 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 05:52:38 np0005604943 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 05:52:38 np0005604943 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 05:52:38 np0005604943 kernel: audit: type=1403 audit(1770029558.265:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb  2 05:52:38 np0005604943 systemd: Successfully loaded SELinux policy in 130.201ms.
Feb  2 05:52:38 np0005604943 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.631ms.
Feb  2 05:52:38 np0005604943 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  2 05:52:38 np0005604943 systemd: Detected virtualization kvm.
Feb  2 05:52:38 np0005604943 systemd: Detected architecture x86-64.
Feb  2 05:52:38 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 05:52:38 np0005604943 systemd: initrd-switch-root.service: Deactivated successfully.
Feb  2 05:52:38 np0005604943 systemd: Stopped Switch Root.
Feb  2 05:52:38 np0005604943 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb  2 05:52:38 np0005604943 systemd: Created slice Slice /system/getty.
Feb  2 05:52:38 np0005604943 systemd: Created slice Slice /system/serial-getty.
Feb  2 05:52:38 np0005604943 systemd: Created slice Slice /system/sshd-keygen.
Feb  2 05:52:38 np0005604943 systemd: Created slice User and Session Slice.
Feb  2 05:52:38 np0005604943 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb  2 05:52:38 np0005604943 systemd: Started Forward Password Requests to Wall Directory Watch.
Feb  2 05:52:38 np0005604943 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb  2 05:52:38 np0005604943 systemd: Reached target Local Encrypted Volumes.
Feb  2 05:52:38 np0005604943 systemd: Stopped target Switch Root.
Feb  2 05:52:38 np0005604943 systemd: Stopped target Initrd File Systems.
Feb  2 05:52:38 np0005604943 systemd: Stopped target Initrd Root File System.
Feb  2 05:52:38 np0005604943 systemd: Reached target Local Integrity Protected Volumes.
Feb  2 05:52:38 np0005604943 systemd: Reached target Path Units.
Feb  2 05:52:38 np0005604943 systemd: Reached target rpc_pipefs.target.
Feb  2 05:52:38 np0005604943 systemd: Reached target Slice Units.
Feb  2 05:52:38 np0005604943 systemd: Reached target Swaps.
Feb  2 05:52:38 np0005604943 systemd: Reached target Local Verity Protected Volumes.
Feb  2 05:52:38 np0005604943 systemd: Listening on RPCbind Server Activation Socket.
Feb  2 05:52:38 np0005604943 systemd: Reached target RPC Port Mapper.
Feb  2 05:52:38 np0005604943 systemd: Listening on Process Core Dump Socket.
Feb  2 05:52:38 np0005604943 systemd: Listening on initctl Compatibility Named Pipe.
Feb  2 05:52:38 np0005604943 systemd: Listening on udev Control Socket.
Feb  2 05:52:38 np0005604943 systemd: Listening on udev Kernel Socket.
Feb  2 05:52:38 np0005604943 systemd: Mounting Huge Pages File System...
Feb  2 05:52:38 np0005604943 systemd: Mounting POSIX Message Queue File System...
Feb  2 05:52:38 np0005604943 systemd: Mounting Kernel Debug File System...
Feb  2 05:52:38 np0005604943 systemd: Mounting Kernel Trace File System...
Feb  2 05:52:38 np0005604943 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb  2 05:52:38 np0005604943 systemd: Starting Create List of Static Device Nodes...
Feb  2 05:52:38 np0005604943 systemd: Starting Load Kernel Module configfs...
Feb  2 05:52:38 np0005604943 systemd: Starting Load Kernel Module drm...
Feb  2 05:52:38 np0005604943 systemd: Starting Load Kernel Module efi_pstore...
Feb  2 05:52:38 np0005604943 systemd: Starting Load Kernel Module fuse...
Feb  2 05:52:38 np0005604943 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb  2 05:52:38 np0005604943 systemd: systemd-fsck-root.service: Deactivated successfully.
Feb  2 05:52:38 np0005604943 systemd: Stopped File System Check on Root Device.
Feb  2 05:52:38 np0005604943 systemd: Stopped Journal Service.
Feb  2 05:52:38 np0005604943 kernel: fuse: init (API version 7.37)
Feb  2 05:52:38 np0005604943 systemd: Starting Journal Service...
Feb  2 05:52:38 np0005604943 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb  2 05:52:38 np0005604943 systemd: Starting Generate network units from Kernel command line...
Feb  2 05:52:38 np0005604943 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  2 05:52:38 np0005604943 systemd: Starting Remount Root and Kernel File Systems...
Feb  2 05:52:38 np0005604943 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb  2 05:52:38 np0005604943 systemd: Starting Apply Kernel Variables...
Feb  2 05:52:38 np0005604943 systemd-journald[680]: Journal started
Feb  2 05:52:38 np0005604943 systemd-journald[680]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb  2 05:52:38 np0005604943 systemd[1]: Queued start job for default target Multi-User System.
Feb  2 05:52:38 np0005604943 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb  2 05:52:38 np0005604943 systemd: Starting Coldplug All udev Devices...
Feb  2 05:52:38 np0005604943 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Feb  2 05:52:38 np0005604943 systemd: Started Journal Service.
Feb  2 05:52:38 np0005604943 kernel: ACPI: bus type drm_connector registered
Feb  2 05:52:38 np0005604943 systemd[1]: Mounted Huge Pages File System.
Feb  2 05:52:38 np0005604943 systemd[1]: Mounted POSIX Message Queue File System.
Feb  2 05:52:38 np0005604943 systemd[1]: Mounted Kernel Debug File System.
Feb  2 05:52:38 np0005604943 systemd[1]: Mounted Kernel Trace File System.
Feb  2 05:52:38 np0005604943 systemd[1]: Finished Create List of Static Device Nodes.
Feb  2 05:52:38 np0005604943 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  2 05:52:38 np0005604943 systemd[1]: Finished Load Kernel Module configfs.
Feb  2 05:52:38 np0005604943 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb  2 05:52:38 np0005604943 systemd[1]: Finished Load Kernel Module drm.
Feb  2 05:52:38 np0005604943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb  2 05:52:38 np0005604943 systemd[1]: Finished Load Kernel Module efi_pstore.
Feb  2 05:52:38 np0005604943 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb  2 05:52:38 np0005604943 systemd[1]: Finished Load Kernel Module fuse.
Feb  2 05:52:38 np0005604943 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb  2 05:52:38 np0005604943 systemd[1]: Finished Generate network units from Kernel command line.
Feb  2 05:52:38 np0005604943 systemd[1]: Finished Remount Root and Kernel File Systems.
Feb  2 05:52:38 np0005604943 systemd[1]: Finished Apply Kernel Variables.
Feb  2 05:52:38 np0005604943 systemd[1]: Mounting FUSE Control File System...
Feb  2 05:52:38 np0005604943 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb  2 05:52:38 np0005604943 systemd[1]: Starting Rebuild Hardware Database...
Feb  2 05:52:38 np0005604943 systemd[1]: Starting Flush Journal to Persistent Storage...
Feb  2 05:52:38 np0005604943 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb  2 05:52:38 np0005604943 systemd[1]: Starting Load/Save OS Random Seed...
Feb  2 05:52:38 np0005604943 systemd[1]: Starting Create System Users...
Feb  2 05:52:38 np0005604943 systemd[1]: Mounted FUSE Control File System.
Feb  2 05:52:38 np0005604943 systemd[1]: Finished Coldplug All udev Devices.
Feb  2 05:52:38 np0005604943 systemd-journald[680]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb  2 05:52:38 np0005604943 systemd-journald[680]: Received client request to flush runtime journal.
Feb  2 05:52:38 np0005604943 systemd[1]: Finished Flush Journal to Persistent Storage.
Feb  2 05:52:38 np0005604943 systemd[1]: Finished Load/Save OS Random Seed.
Feb  2 05:52:38 np0005604943 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb  2 05:52:38 np0005604943 systemd[1]: Finished Create System Users.
Feb  2 05:52:38 np0005604943 systemd[1]: Starting Create Static Device Nodes in /dev...
Feb  2 05:52:39 np0005604943 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb  2 05:52:39 np0005604943 systemd[1]: Reached target Preparation for Local File Systems.
Feb  2 05:52:39 np0005604943 systemd[1]: Reached target Local File Systems.
Feb  2 05:52:39 np0005604943 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb  2 05:52:39 np0005604943 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb  2 05:52:39 np0005604943 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb  2 05:52:39 np0005604943 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Feb  2 05:52:39 np0005604943 systemd[1]: Starting Automatic Boot Loader Update...
Feb  2 05:52:39 np0005604943 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb  2 05:52:39 np0005604943 systemd[1]: Starting Create Volatile Files and Directories...
Feb  2 05:52:39 np0005604943 bootctl[698]: Couldn't find EFI system partition, skipping.
Feb  2 05:52:39 np0005604943 systemd[1]: Finished Automatic Boot Loader Update.
Feb  2 05:52:39 np0005604943 systemd[1]: Finished Create Volatile Files and Directories.
Feb  2 05:52:39 np0005604943 systemd[1]: Starting Security Auditing Service...
Feb  2 05:52:39 np0005604943 systemd[1]: Starting RPC Bind...
Feb  2 05:52:39 np0005604943 systemd[1]: Starting Rebuild Journal Catalog...
Feb  2 05:52:39 np0005604943 auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Feb  2 05:52:39 np0005604943 auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Feb  2 05:52:39 np0005604943 systemd[1]: Finished Rebuild Journal Catalog.
Feb  2 05:52:39 np0005604943 systemd[1]: Started RPC Bind.
Feb  2 05:52:39 np0005604943 augenrules[709]: /sbin/augenrules: No change
Feb  2 05:52:39 np0005604943 augenrules[724]: No rules
Feb  2 05:52:39 np0005604943 augenrules[724]: enabled 1
Feb  2 05:52:39 np0005604943 augenrules[724]: failure 1
Feb  2 05:52:39 np0005604943 augenrules[724]: pid 704
Feb  2 05:52:39 np0005604943 augenrules[724]: rate_limit 0
Feb  2 05:52:39 np0005604943 augenrules[724]: backlog_limit 8192
Feb  2 05:52:39 np0005604943 augenrules[724]: lost 0
Feb  2 05:52:39 np0005604943 augenrules[724]: backlog 3
Feb  2 05:52:39 np0005604943 augenrules[724]: backlog_wait_time 60000
Feb  2 05:52:39 np0005604943 augenrules[724]: backlog_wait_time_actual 0
Feb  2 05:52:39 np0005604943 augenrules[724]: enabled 1
Feb  2 05:52:39 np0005604943 augenrules[724]: failure 1
Feb  2 05:52:39 np0005604943 augenrules[724]: pid 704
Feb  2 05:52:39 np0005604943 augenrules[724]: rate_limit 0
Feb  2 05:52:39 np0005604943 augenrules[724]: backlog_limit 8192
Feb  2 05:52:39 np0005604943 augenrules[724]: lost 0
Feb  2 05:52:39 np0005604943 augenrules[724]: backlog 3
Feb  2 05:52:39 np0005604943 augenrules[724]: backlog_wait_time 60000
Feb  2 05:52:39 np0005604943 augenrules[724]: backlog_wait_time_actual 0
Feb  2 05:52:39 np0005604943 augenrules[724]: enabled 1
Feb  2 05:52:39 np0005604943 augenrules[724]: failure 1
Feb  2 05:52:39 np0005604943 augenrules[724]: pid 704
Feb  2 05:52:39 np0005604943 augenrules[724]: rate_limit 0
Feb  2 05:52:39 np0005604943 augenrules[724]: backlog_limit 8192
Feb  2 05:52:39 np0005604943 augenrules[724]: lost 0
Feb  2 05:52:39 np0005604943 augenrules[724]: backlog 3
Feb  2 05:52:39 np0005604943 augenrules[724]: backlog_wait_time 60000
Feb  2 05:52:39 np0005604943 augenrules[724]: backlog_wait_time_actual 0
Feb  2 05:52:39 np0005604943 systemd[1]: Started Security Auditing Service.
Feb  2 05:52:39 np0005604943 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb  2 05:52:39 np0005604943 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb  2 05:52:39 np0005604943 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb  2 05:52:39 np0005604943 systemd[1]: Finished Rebuild Hardware Database.
Feb  2 05:52:39 np0005604943 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb  2 05:52:39 np0005604943 systemd[1]: Starting Update is Completed...
Feb  2 05:52:39 np0005604943 systemd[1]: Finished Update is Completed.
Feb  2 05:52:39 np0005604943 systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Feb  2 05:52:39 np0005604943 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb  2 05:52:39 np0005604943 systemd[1]: Reached target System Initialization.
Feb  2 05:52:39 np0005604943 systemd[1]: Started dnf makecache --timer.
Feb  2 05:52:39 np0005604943 systemd[1]: Started Daily rotation of log files.
Feb  2 05:52:39 np0005604943 systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb  2 05:52:39 np0005604943 systemd[1]: Reached target Timer Units.
Feb  2 05:52:39 np0005604943 systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb  2 05:52:39 np0005604943 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb  2 05:52:39 np0005604943 systemd[1]: Reached target Socket Units.
Feb  2 05:52:39 np0005604943 systemd[1]: Starting D-Bus System Message Bus...
Feb  2 05:52:39 np0005604943 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  2 05:52:39 np0005604943 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb  2 05:52:39 np0005604943 systemd[1]: Starting Load Kernel Module configfs...
Feb  2 05:52:39 np0005604943 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  2 05:52:39 np0005604943 systemd[1]: Finished Load Kernel Module configfs.
Feb  2 05:52:39 np0005604943 systemd-udevd[734]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 05:52:39 np0005604943 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb  2 05:52:39 np0005604943 systemd[1]: Started D-Bus System Message Bus.
Feb  2 05:52:39 np0005604943 systemd[1]: Reached target Basic System.
Feb  2 05:52:39 np0005604943 dbus-broker-lau[768]: Ready
Feb  2 05:52:39 np0005604943 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb  2 05:52:39 np0005604943 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb  2 05:52:39 np0005604943 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb  2 05:52:39 np0005604943 systemd[1]: Starting NTP client/server...
Feb  2 05:52:39 np0005604943 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Feb  2 05:52:39 np0005604943 systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb  2 05:52:39 np0005604943 systemd[1]: Starting IPv4 firewall with iptables...
Feb  2 05:52:39 np0005604943 systemd[1]: Started irqbalance daemon.
Feb  2 05:52:39 np0005604943 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb  2 05:52:39 np0005604943 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 05:52:39 np0005604943 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 05:52:39 np0005604943 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 05:52:39 np0005604943 systemd[1]: Reached target sshd-keygen.target.
Feb  2 05:52:39 np0005604943 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb  2 05:52:39 np0005604943 systemd[1]: Reached target User and Group Name Lookups.
Feb  2 05:52:39 np0005604943 systemd[1]: Starting User Login Management...
Feb  2 05:52:39 np0005604943 systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb  2 05:52:39 np0005604943 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Feb  2 05:52:39 np0005604943 chronyd[801]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb  2 05:52:39 np0005604943 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb  2 05:52:39 np0005604943 systemd-logind[786]: New seat seat0.
Feb  2 05:52:39 np0005604943 systemd[1]: Started User Login Management.
Feb  2 05:52:39 np0005604943 chronyd[801]: Loaded 0 symmetric keys
Feb  2 05:52:39 np0005604943 chronyd[801]: Using right/UTC timezone to obtain leap second data
Feb  2 05:52:39 np0005604943 chronyd[801]: Loaded seccomp filter (level 2)
Feb  2 05:52:39 np0005604943 systemd[1]: Started NTP client/server.
Feb  2 05:52:40 np0005604943 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Feb  2 05:52:40 np0005604943 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Feb  2 05:52:40 np0005604943 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb  2 05:52:40 np0005604943 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb  2 05:52:40 np0005604943 kernel: Console: switching to colour dummy device 80x25
Feb  2 05:52:40 np0005604943 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb  2 05:52:40 np0005604943 kernel: [drm] features: -context_init
Feb  2 05:52:40 np0005604943 kernel: [drm] number of scanouts: 1
Feb  2 05:52:40 np0005604943 kernel: [drm] number of cap sets: 0
Feb  2 05:52:40 np0005604943 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb  2 05:52:40 np0005604943 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb  2 05:52:40 np0005604943 kernel: Console: switching to colour frame buffer device 128x48
Feb  2 05:52:40 np0005604943 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb  2 05:52:40 np0005604943 kernel: kvm_amd: TSC scaling supported
Feb  2 05:52:40 np0005604943 kernel: kvm_amd: Nested Virtualization enabled
Feb  2 05:52:40 np0005604943 kernel: kvm_amd: Nested Paging enabled
Feb  2 05:52:40 np0005604943 kernel: kvm_amd: LBR virtualization supported
Feb  2 05:52:40 np0005604943 iptables.init[781]: iptables: Applying firewall rules: [  OK  ]
Feb  2 05:52:40 np0005604943 systemd[1]: Finished IPv4 firewall with iptables.
Feb  2 05:52:40 np0005604943 cloud-init[841]: Cloud-init v. 24.4-8.el9 running 'init-local' at Mon, 02 Feb 2026 10:52:40 +0000. Up 6.07 seconds.
Feb  2 05:52:40 np0005604943 systemd[1]: run-cloud\x2dinit-tmp-tmpcxu2d0km.mount: Deactivated successfully.
Feb  2 05:52:40 np0005604943 systemd[1]: Starting Hostname Service...
Feb  2 05:52:41 np0005604943 systemd[1]: Started Hostname Service.
Feb  2 05:52:41 np0005604943 systemd-hostnamed[855]: Hostname set to <np0005604943.novalocal> (static)
Feb  2 05:52:41 np0005604943 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Feb  2 05:52:41 np0005604943 systemd[1]: Reached target Preparation for Network.
Feb  2 05:52:41 np0005604943 systemd[1]: Starting Network Manager...
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.1666] NetworkManager (version 1.54.3-2.el9) is starting... (boot:dbf42354-b9fe-4201-8fff-3ebe30e4e21a)
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.1671] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.1807] manager[0x55864517e000]: monitoring kernel firmware directory '/lib/firmware'.
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.1844] hostname: hostname: using hostnamed
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.1844] hostname: static hostname changed from (none) to "np0005604943.novalocal"
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.1847] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.1931] manager[0x55864517e000]: rfkill: Wi-Fi hardware radio set enabled
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.1931] manager[0x55864517e000]: rfkill: WWAN hardware radio set enabled
Feb  2 05:52:41 np0005604943 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2000] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2000] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2001] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2001] manager: Networking is enabled by state file
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2003] settings: Loaded settings plugin: keyfile (internal)
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2037] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2057] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2071] dhcp: init: Using DHCP client 'internal'
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2079] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2090] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2102] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2113] device (lo): Activation: starting connection 'lo' (175d73e5-40e0-45b2-8b10-784bc91cfee9)
Feb  2 05:52:41 np0005604943 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2120] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2123] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 05:52:41 np0005604943 systemd[1]: Started Network Manager.
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2153] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2158] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2160] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2162] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb  2 05:52:41 np0005604943 systemd[1]: Reached target Network.
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2166] device (eth0): carrier: link connected
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2169] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2176] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2185] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2189] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2189] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2191] manager: NetworkManager state is now CONNECTING
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2193] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 05:52:41 np0005604943 systemd[1]: Starting Network Manager Wait Online...
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2203] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2207] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 05:52:41 np0005604943 systemd[1]: Starting GSSAPI Proxy Daemon...
Feb  2 05:52:41 np0005604943 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2332] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2334] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb  2 05:52:41 np0005604943 NetworkManager[859]: <info>  [1770029561.2340] device (lo): Activation: successful, device activated.
Feb  2 05:52:41 np0005604943 systemd[1]: Started GSSAPI Proxy Daemon.
Feb  2 05:52:41 np0005604943 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb  2 05:52:41 np0005604943 systemd[1]: Reached target NFS client services.
Feb  2 05:52:41 np0005604943 systemd[1]: Reached target Preparation for Remote File Systems.
Feb  2 05:52:41 np0005604943 systemd[1]: Reached target Remote File Systems.
Feb  2 05:52:41 np0005604943 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  2 05:52:42 np0005604943 NetworkManager[859]: <info>  [1770029562.4513] dhcp4 (eth0): state changed new lease, address=38.102.83.41
Feb  2 05:52:42 np0005604943 NetworkManager[859]: <info>  [1770029562.4527] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb  2 05:52:42 np0005604943 NetworkManager[859]: <info>  [1770029562.4556] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 05:52:42 np0005604943 NetworkManager[859]: <info>  [1770029562.4589] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 05:52:42 np0005604943 NetworkManager[859]: <info>  [1770029562.4593] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 05:52:42 np0005604943 NetworkManager[859]: <info>  [1770029562.4599] manager: NetworkManager state is now CONNECTED_SITE
Feb  2 05:52:42 np0005604943 NetworkManager[859]: <info>  [1770029562.4604] device (eth0): Activation: successful, device activated.
Feb  2 05:52:42 np0005604943 NetworkManager[859]: <info>  [1770029562.4611] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb  2 05:52:42 np0005604943 NetworkManager[859]: <info>  [1770029562.4616] manager: startup complete
Feb  2 05:52:42 np0005604943 systemd[1]: Finished Network Manager Wait Online.
Feb  2 05:52:42 np0005604943 systemd[1]: Starting Cloud-init: Network Stage...
Feb  2 05:52:42 np0005604943 cloud-init[924]: Cloud-init v. 24.4-8.el9 running 'init' at Mon, 02 Feb 2026 10:52:42 +0000. Up 8.14 seconds.
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: |  eth0  | True |         38.102.83.41         | 255.255.255.0 | global | fa:16:3e:c9:58:e3 |
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fec9:58e3/64 |       .       |  link  | fa:16:3e:c9:58:e3 |
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Feb  2 05:52:42 np0005604943 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Feb  2 05:52:43 np0005604943 cloud-init[924]: Generating public/private rsa key pair.
Feb  2 05:52:43 np0005604943 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb  2 05:52:43 np0005604943 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb  2 05:52:43 np0005604943 cloud-init[924]: The key fingerprint is:
Feb  2 05:52:43 np0005604943 cloud-init[924]: SHA256:0URjfKMrpish9/pJR3f9jALpXSET5bz/k7fQQ885suw root@np0005604943.novalocal
Feb  2 05:52:43 np0005604943 cloud-init[924]: The key's randomart image is:
Feb  2 05:52:43 np0005604943 cloud-init[924]: +---[RSA 3072]----+
Feb  2 05:52:43 np0005604943 cloud-init[924]: |         o= ...  |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |         +..o+   |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |        . .oo.+  |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |         ... + o |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |        S +.. +. |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |  . o  .oo.+ .o*o|
Feb  2 05:52:43 np0005604943 cloud-init[924]: |   o o.o... oo.=B|
Feb  2 05:52:43 np0005604943 cloud-init[924]: |    ..oo    ..+o=|
Feb  2 05:52:43 np0005604943 cloud-init[924]: |    .++.    .E .=|
Feb  2 05:52:43 np0005604943 cloud-init[924]: +----[SHA256]-----+
Feb  2 05:52:43 np0005604943 cloud-init[924]: Generating public/private ecdsa key pair.
Feb  2 05:52:43 np0005604943 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb  2 05:52:43 np0005604943 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb  2 05:52:43 np0005604943 cloud-init[924]: The key fingerprint is:
Feb  2 05:52:43 np0005604943 cloud-init[924]: SHA256:XnmzDf4dK4U4uyZKRPL1U1I4mkBvVAZV5nrZmJLPqUI root@np0005604943.novalocal
Feb  2 05:52:43 np0005604943 cloud-init[924]: The key's randomart image is:
Feb  2 05:52:43 np0005604943 cloud-init[924]: +---[ECDSA 256]---+
Feb  2 05:52:43 np0005604943 cloud-init[924]: |      .. o++o+   |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |       .o .o+    |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |     . ..+o..o   |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |      + oo..= =  |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |       oS o*==.. |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |      .. E =**.. |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |       .o   =+o. |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |      .  o o.o .o|
Feb  2 05:52:43 np0005604943 cloud-init[924]: |       .. +o. o..|
Feb  2 05:52:43 np0005604943 cloud-init[924]: +----[SHA256]-----+
Feb  2 05:52:43 np0005604943 cloud-init[924]: Generating public/private ed25519 key pair.
Feb  2 05:52:43 np0005604943 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb  2 05:52:43 np0005604943 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb  2 05:52:43 np0005604943 cloud-init[924]: The key fingerprint is:
Feb  2 05:52:43 np0005604943 cloud-init[924]: SHA256:ooRBqR5GRNfH3I4uiFVVzyjxbtrluS2rG6BpMlEh7wY root@np0005604943.novalocal
Feb  2 05:52:43 np0005604943 cloud-init[924]: The key's randomart image is:
Feb  2 05:52:43 np0005604943 cloud-init[924]: +--[ED25519 256]--+
Feb  2 05:52:43 np0005604943 cloud-init[924]: |oo.+..+oo.       |
Feb  2 05:52:43 np0005604943 cloud-init[924]: | oo oo.+o.+      |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |...E.o..oo o     |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |.o ++  .o.       |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |o =.ooo.So .     |
Feb  2 05:52:43 np0005604943 cloud-init[924]: | o ooooo= o .    |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |   o.+.. o o     |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |    +     ..o    |
Feb  2 05:52:43 np0005604943 cloud-init[924]: |         oooo.   |
Feb  2 05:52:43 np0005604943 cloud-init[924]: +----[SHA256]-----+
Feb  2 05:52:44 np0005604943 systemd[1]: Finished Cloud-init: Network Stage.
Feb  2 05:52:44 np0005604943 systemd[1]: Reached target Cloud-config availability.
Feb  2 05:52:44 np0005604943 systemd[1]: Reached target Network is Online.
Feb  2 05:52:44 np0005604943 systemd[1]: Starting Cloud-init: Config Stage...
Feb  2 05:52:44 np0005604943 systemd[1]: Starting Crash recovery kernel arming...
Feb  2 05:52:44 np0005604943 systemd[1]: Starting Notify NFS peers of a restart...
Feb  2 05:52:44 np0005604943 systemd[1]: Starting System Logging Service...
Feb  2 05:52:44 np0005604943 systemd[1]: Starting OpenSSH server daemon...
Feb  2 05:52:44 np0005604943 sm-notify[1008]: Version 2.5.4 starting
Feb  2 05:52:44 np0005604943 systemd[1]: Starting Permit User Sessions...
Feb  2 05:52:44 np0005604943 systemd[1]: Started Notify NFS peers of a restart.
Feb  2 05:52:44 np0005604943 systemd[1]: Finished Permit User Sessions.
Feb  2 05:52:44 np0005604943 systemd[1]: Started Command Scheduler.
Feb  2 05:52:44 np0005604943 systemd[1]: Started Getty on tty1.
Feb  2 05:52:44 np0005604943 systemd[1]: Started Serial Getty on ttyS0.
Feb  2 05:52:44 np0005604943 systemd[1]: Reached target Login Prompts.
Feb  2 05:52:44 np0005604943 systemd[1]: Started OpenSSH server daemon.
Feb  2 05:52:44 np0005604943 rsyslogd[1009]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1009" x-info="https://www.rsyslog.com"] start
Feb  2 05:52:44 np0005604943 rsyslogd[1009]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Feb  2 05:52:44 np0005604943 systemd[1]: Started System Logging Service.
Feb  2 05:52:44 np0005604943 systemd[1]: Reached target Multi-User System.
Feb  2 05:52:44 np0005604943 systemd[1]: Starting Record Runlevel Change in UTMP...
Feb  2 05:52:44 np0005604943 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb  2 05:52:44 np0005604943 systemd[1]: Finished Record Runlevel Change in UTMP.
Feb  2 05:52:44 np0005604943 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 05:52:44 np0005604943 kdumpctl[1022]: kdump: No kdump initial ramdisk found.
Feb  2 05:52:44 np0005604943 kdumpctl[1022]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Feb  2 05:52:44 np0005604943 cloud-init[1186]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Mon, 02 Feb 2026 10:52:44 +0000. Up 9.80 seconds.
Feb  2 05:52:44 np0005604943 systemd[1]: Finished Cloud-init: Config Stage.
Feb  2 05:52:44 np0005604943 systemd[1]: Starting Cloud-init: Final Stage...
Feb  2 05:52:44 np0005604943 dracut[1269]: dracut-057-102.git20250818.el9
Feb  2 05:52:44 np0005604943 dracut[1271]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Feb  2 05:52:44 np0005604943 cloud-init[1339]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Mon, 02 Feb 2026 10:52:44 +0000. Up 10.16 seconds.
Feb  2 05:52:44 np0005604943 cloud-init[1344]: #############################################################
Feb  2 05:52:44 np0005604943 cloud-init[1347]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb  2 05:52:44 np0005604943 cloud-init[1350]: 256 SHA256:XnmzDf4dK4U4uyZKRPL1U1I4mkBvVAZV5nrZmJLPqUI root@np0005604943.novalocal (ECDSA)
Feb  2 05:52:44 np0005604943 cloud-init[1352]: 256 SHA256:ooRBqR5GRNfH3I4uiFVVzyjxbtrluS2rG6BpMlEh7wY root@np0005604943.novalocal (ED25519)
Feb  2 05:52:44 np0005604943 cloud-init[1354]: 3072 SHA256:0URjfKMrpish9/pJR3f9jALpXSET5bz/k7fQQ885suw root@np0005604943.novalocal (RSA)
Feb  2 05:52:44 np0005604943 cloud-init[1358]: -----END SSH HOST KEY FINGERPRINTS-----
Feb  2 05:52:44 np0005604943 cloud-init[1359]: #############################################################
Feb  2 05:52:44 np0005604943 cloud-init[1339]: Cloud-init v. 24.4-8.el9 finished at Mon, 02 Feb 2026 10:52:44 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.33 seconds
Feb  2 05:52:44 np0005604943 systemd[1]: Finished Cloud-init: Final Stage.
Feb  2 05:52:44 np0005604943 systemd[1]: Reached target Cloud-init target.
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: memstrack is not available
Feb  2 05:52:45 np0005604943 dracut[1271]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb  2 05:52:45 np0005604943 dracut[1271]: memstrack is not available
Feb  2 05:52:45 np0005604943 dracut[1271]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb  2 05:52:45 np0005604943 dracut[1271]: *** Including module: systemd ***
Feb  2 05:52:46 np0005604943 dracut[1271]: *** Including module: fips ***
Feb  2 05:52:46 np0005604943 dracut[1271]: *** Including module: systemd-initrd ***
Feb  2 05:52:46 np0005604943 dracut[1271]: *** Including module: i18n ***
Feb  2 05:52:46 np0005604943 dracut[1271]: *** Including module: drm ***
Feb  2 05:52:46 np0005604943 dracut[1271]: *** Including module: prefixdevname ***
Feb  2 05:52:46 np0005604943 dracut[1271]: *** Including module: kernel-modules ***
Feb  2 05:52:47 np0005604943 kernel: block vda: the capability attribute has been deprecated.
Feb  2 05:52:47 np0005604943 chronyd[801]: Selected source 103.254.63.155 (2.centos.pool.ntp.org)
Feb  2 05:52:47 np0005604943 chronyd[801]: System clock TAI offset set to 37 seconds
Feb  2 05:52:47 np0005604943 dracut[1271]: *** Including module: kernel-modules-extra ***
Feb  2 05:52:47 np0005604943 dracut[1271]: *** Including module: qemu ***
Feb  2 05:52:47 np0005604943 dracut[1271]: *** Including module: fstab-sys ***
Feb  2 05:52:47 np0005604943 dracut[1271]: *** Including module: rootfs-block ***
Feb  2 05:52:47 np0005604943 dracut[1271]: *** Including module: terminfo ***
Feb  2 05:52:47 np0005604943 dracut[1271]: *** Including module: udev-rules ***
Feb  2 05:52:48 np0005604943 dracut[1271]: Skipping udev rule: 91-permissions.rules
Feb  2 05:52:48 np0005604943 dracut[1271]: Skipping udev rule: 80-drivers-modprobe.rules
Feb  2 05:52:48 np0005604943 dracut[1271]: *** Including module: virtiofs ***
Feb  2 05:52:48 np0005604943 dracut[1271]: *** Including module: dracut-systemd ***
Feb  2 05:52:48 np0005604943 dracut[1271]: *** Including module: usrmount ***
Feb  2 05:52:48 np0005604943 dracut[1271]: *** Including module: base ***
Feb  2 05:52:48 np0005604943 dracut[1271]: *** Including module: fs-lib ***
Feb  2 05:52:48 np0005604943 dracut[1271]: *** Including module: kdumpbase ***
Feb  2 05:52:48 np0005604943 dracut[1271]: *** Including module: microcode_ctl-fw_dir_override ***
Feb  2 05:52:48 np0005604943 dracut[1271]:  microcode_ctl module: mangling fw_dir
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: configuration "intel" is ignored
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb  2 05:52:48 np0005604943 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb  2 05:52:49 np0005604943 dracut[1271]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb  2 05:52:49 np0005604943 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb  2 05:52:49 np0005604943 dracut[1271]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb  2 05:52:49 np0005604943 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Feb  2 05:52:49 np0005604943 dracut[1271]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Feb  2 05:52:49 np0005604943 dracut[1271]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Feb  2 05:52:49 np0005604943 dracut[1271]: *** Including module: openssl ***
Feb  2 05:52:49 np0005604943 dracut[1271]: *** Including module: shutdown ***
Feb  2 05:52:49 np0005604943 dracut[1271]: *** Including module: squash ***
Feb  2 05:52:49 np0005604943 dracut[1271]: *** Including modules done ***
Feb  2 05:52:49 np0005604943 dracut[1271]: *** Installing kernel module dependencies ***
Feb  2 05:52:49 np0005604943 dracut[1271]: *** Installing kernel module dependencies done ***
Feb  2 05:52:49 np0005604943 dracut[1271]: *** Resolving executable dependencies ***
Feb  2 05:52:50 np0005604943 irqbalance[782]: Cannot change IRQ 35 affinity: Operation not permitted
Feb  2 05:52:50 np0005604943 irqbalance[782]: IRQ 35 affinity is now unmanaged
Feb  2 05:52:50 np0005604943 irqbalance[782]: Cannot change IRQ 33 affinity: Operation not permitted
Feb  2 05:52:50 np0005604943 irqbalance[782]: IRQ 33 affinity is now unmanaged
Feb  2 05:52:50 np0005604943 irqbalance[782]: Cannot change IRQ 31 affinity: Operation not permitted
Feb  2 05:52:50 np0005604943 irqbalance[782]: IRQ 31 affinity is now unmanaged
Feb  2 05:52:50 np0005604943 irqbalance[782]: Cannot change IRQ 28 affinity: Operation not permitted
Feb  2 05:52:50 np0005604943 irqbalance[782]: IRQ 28 affinity is now unmanaged
Feb  2 05:52:50 np0005604943 irqbalance[782]: Cannot change IRQ 34 affinity: Operation not permitted
Feb  2 05:52:50 np0005604943 irqbalance[782]: IRQ 34 affinity is now unmanaged
Feb  2 05:52:50 np0005604943 irqbalance[782]: Cannot change IRQ 32 affinity: Operation not permitted
Feb  2 05:52:50 np0005604943 irqbalance[782]: IRQ 32 affinity is now unmanaged
Feb  2 05:52:50 np0005604943 irqbalance[782]: Cannot change IRQ 30 affinity: Operation not permitted
Feb  2 05:52:50 np0005604943 irqbalance[782]: IRQ 30 affinity is now unmanaged
Feb  2 05:52:50 np0005604943 irqbalance[782]: Cannot change IRQ 29 affinity: Operation not permitted
Feb  2 05:52:50 np0005604943 irqbalance[782]: IRQ 29 affinity is now unmanaged
Feb  2 05:52:51 np0005604943 dracut[1271]: *** Resolving executable dependencies done ***
Feb  2 05:52:51 np0005604943 dracut[1271]: *** Generating early-microcode cpio image ***
Feb  2 05:52:51 np0005604943 dracut[1271]: *** Store current command line parameters ***
Feb  2 05:52:51 np0005604943 dracut[1271]: Stored kernel commandline:
Feb  2 05:52:51 np0005604943 dracut[1271]: No dracut internal kernel commandline stored in the initramfs
Feb  2 05:52:51 np0005604943 dracut[1271]: *** Install squash loader ***
Feb  2 05:52:52 np0005604943 dracut[1271]: *** Squashing the files inside the initramfs ***
Feb  2 05:52:52 np0005604943 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 05:52:53 np0005604943 dracut[1271]: *** Squashing the files inside the initramfs done ***
Feb  2 05:52:53 np0005604943 dracut[1271]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Feb  2 05:52:53 np0005604943 dracut[1271]: *** Hardlinking files ***
Feb  2 05:52:53 np0005604943 dracut[1271]: *** Hardlinking files done ***
Feb  2 05:52:53 np0005604943 dracut[1271]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Feb  2 05:52:53 np0005604943 kdumpctl[1022]: kdump: kexec: loaded kdump kernel
Feb  2 05:52:53 np0005604943 kdumpctl[1022]: kdump: Starting kdump: [OK]
Feb  2 05:52:53 np0005604943 systemd[1]: Finished Crash recovery kernel arming.
Feb  2 05:52:53 np0005604943 systemd[1]: Startup finished in 1.252s (kernel) + 2.293s (initrd) + 15.770s (userspace) = 19.316s.
Feb  2 05:52:58 np0005604943 systemd[1]: Created slice User Slice of UID 1000.
Feb  2 05:52:58 np0005604943 systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb  2 05:52:58 np0005604943 systemd-logind[786]: New session 1 of user zuul.
Feb  2 05:52:58 np0005604943 systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb  2 05:52:58 np0005604943 systemd[1]: Starting User Manager for UID 1000...
Feb  2 05:52:58 np0005604943 systemd[4309]: Queued start job for default target Main User Target.
Feb  2 05:52:58 np0005604943 systemd[4309]: Created slice User Application Slice.
Feb  2 05:52:58 np0005604943 systemd[4309]: Started Mark boot as successful after the user session has run 2 minutes.
Feb  2 05:52:58 np0005604943 systemd[4309]: Started Daily Cleanup of User's Temporary Directories.
Feb  2 05:52:58 np0005604943 systemd[4309]: Reached target Paths.
Feb  2 05:52:58 np0005604943 systemd[4309]: Reached target Timers.
Feb  2 05:52:58 np0005604943 systemd[4309]: Starting D-Bus User Message Bus Socket...
Feb  2 05:52:58 np0005604943 systemd[4309]: Starting Create User's Volatile Files and Directories...
Feb  2 05:52:58 np0005604943 systemd[4309]: Finished Create User's Volatile Files and Directories.
Feb  2 05:52:58 np0005604943 systemd[4309]: Listening on D-Bus User Message Bus Socket.
Feb  2 05:52:58 np0005604943 systemd[4309]: Reached target Sockets.
Feb  2 05:52:58 np0005604943 systemd[4309]: Reached target Basic System.
Feb  2 05:52:58 np0005604943 systemd[4309]: Reached target Main User Target.
Feb  2 05:52:58 np0005604943 systemd[4309]: Startup finished in 127ms.
Feb  2 05:52:58 np0005604943 systemd[1]: Started User Manager for UID 1000.
Feb  2 05:52:58 np0005604943 systemd[1]: Started Session 1 of User zuul.
Feb  2 05:52:59 np0005604943 python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 05:53:02 np0005604943 python3[4419]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 05:53:08 np0005604943 python3[4477]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 05:53:09 np0005604943 python3[4517]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb  2 05:53:11 np0005604943 python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDdf9y9MFanepdmG4wV8a5jOg32ETe0Pg1TzqAHnOLeMeTCNEuccJm7TqIoyNkUV8eYvb2SJ79B5FNdcihpa/WTGhJReLunK7JsCfy0sc5hSP6OoLNKGNNU5FmdyuB5m4dG3stNw07vZ3lOT1eEtWKs2bPdjiRFDdfAXLnvoaqQAjuAOxgAnZxW2yyH5m44BB2EjRF+fPczalB/XjGfgl0y44KmiwsnrZ3DRBiJ+UTnhhUKuBKlxeEOdcpQs9rVQNsMDUAyUIfbPC27NK7Oq2pvgKFcm9um0ZiD1l+/Wa07M0lBxlH1bXdeW3ULJbz6Hj+1bgTT1dBLqmZm/7BhSRhvMsvOASO/8cueUfGqVu+QyuIFgkP12zXEb72gyz3dKyKflMWQsozs/Q4s/Gu7x+GX+R7G6ATOnWAmWrACY7gRD0RcwyGMJPgyBMdghSyThzyuoQalrB7mewuReCTSdjDXDi9dPxr7a6vtni+ngPPcG16l0OGUYEB0EoQfZ6SPqUM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:11 np0005604943 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  2 05:53:11 np0005604943 python3[4569]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:12 np0005604943 python3[4668]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 05:53:12 np0005604943 python3[4739]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770029591.7953176-207-103648950608721/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=1692a61f97514f4687ef82fba75a7a4b_id_rsa follow=False checksum=1c54221f1499a9d8530ee6509e4718d5372ef323 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:13 np0005604943 python3[4862]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 05:53:13 np0005604943 python3[4933]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770029592.7970645-240-91126973564845/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=1692a61f97514f4687ef82fba75a7a4b_id_rsa.pub follow=False checksum=23ce9e9acb5614ec4a3d42f4018863cfc82e6aa3 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:14 np0005604943 python3[4981]: ansible-ping Invoked with data=pong
Feb  2 05:53:15 np0005604943 python3[5005]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 05:53:17 np0005604943 python3[5063]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb  2 05:53:18 np0005604943 python3[5095]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:19 np0005604943 python3[5119]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:19 np0005604943 python3[5143]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:19 np0005604943 python3[5167]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:19 np0005604943 python3[5191]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:20 np0005604943 python3[5215]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:22 np0005604943 python3[5241]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:22 np0005604943 python3[5319]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 05:53:23 np0005604943 python3[5392]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1770029602.2833333-21-94514912428922/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:23 np0005604943 python3[5440]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:23 np0005604943 python3[5464]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:24 np0005604943 python3[5488]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:24 np0005604943 python3[5512]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:24 np0005604943 python3[5536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:25 np0005604943 python3[5560]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:25 np0005604943 python3[5584]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:25 np0005604943 python3[5608]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:25 np0005604943 python3[5632]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:25 np0005604943 python3[5656]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:26 np0005604943 python3[5680]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:26 np0005604943 python3[5704]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:26 np0005604943 python3[5728]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:27 np0005604943 python3[5752]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:27 np0005604943 python3[5776]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:27 np0005604943 python3[5800]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:27 np0005604943 python3[5824]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:28 np0005604943 python3[5848]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:28 np0005604943 python3[5872]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:28 np0005604943 python3[5896]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:28 np0005604943 python3[5920]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:29 np0005604943 python3[5944]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:29 np0005604943 python3[5968]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:29 np0005604943 python3[5992]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:30 np0005604943 python3[6016]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:30 np0005604943 python3[6040]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 05:53:33 np0005604943 python3[6066]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb  2 05:53:33 np0005604943 systemd[1]: Starting Time & Date Service...
Feb  2 05:53:33 np0005604943 systemd[1]: Started Time & Date Service.
Feb  2 05:53:33 np0005604943 systemd-timedated[6068]: Changed time zone to 'UTC' (UTC).
Feb  2 05:53:34 np0005604943 python3[6098]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:34 np0005604943 python3[6174]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 05:53:35 np0005604943 python3[6245]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1770029614.6510072-153-211143309167447/source _original_basename=tmpf0vanydm follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:35 np0005604943 python3[6345]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 05:53:36 np0005604943 python3[6416]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1770029615.6170502-183-64798892217761/source _original_basename=tmprve89o1l follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:37 np0005604943 python3[6518]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 05:53:37 np0005604943 python3[6591]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1770029616.723496-231-262461522115796/source _original_basename=tmpfa86r7vr follow=False checksum=2fe30ad4c160832b8d14788afb52d6cbbb5eee57 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:38 np0005604943 python3[6639]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 05:53:38 np0005604943 python3[6665]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 05:53:38 np0005604943 python3[6745]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 05:53:39 np0005604943 python3[6818]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1770029618.600949-273-109076756278875/source _original_basename=tmphhwzs1ll follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:53:39 np0005604943 python3[6869]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-d445-b4d4-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 05:53:40 np0005604943 python3[6897]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-d445-b4d4-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb  2 05:53:41 np0005604943 python3[6925]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:54:01 np0005604943 python3[6951]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:54:03 np0005604943 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb  2 05:54:35 np0005604943 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb  2 05:54:35 np0005604943 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Feb  2 05:54:35 np0005604943 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Feb  2 05:54:35 np0005604943 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Feb  2 05:54:35 np0005604943 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Feb  2 05:54:35 np0005604943 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Feb  2 05:54:35 np0005604943 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Feb  2 05:54:35 np0005604943 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Feb  2 05:54:35 np0005604943 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Feb  2 05:54:35 np0005604943 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb  2 05:54:35 np0005604943 NetworkManager[859]: <info>  [1770029675.1353] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb  2 05:54:35 np0005604943 systemd-udevd[6955]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 05:54:35 np0005604943 NetworkManager[859]: <info>  [1770029675.1503] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 05:54:35 np0005604943 NetworkManager[859]: <info>  [1770029675.1524] settings: (eth1): created default wired connection 'Wired connection 1'
Feb  2 05:54:35 np0005604943 NetworkManager[859]: <info>  [1770029675.1526] device (eth1): carrier: link connected
Feb  2 05:54:35 np0005604943 NetworkManager[859]: <info>  [1770029675.1528] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb  2 05:54:35 np0005604943 NetworkManager[859]: <info>  [1770029675.1532] policy: auto-activating connection 'Wired connection 1' (e49d727a-1413-3d39-bbad-8217f4075818)
Feb  2 05:54:35 np0005604943 NetworkManager[859]: <info>  [1770029675.1535] device (eth1): Activation: starting connection 'Wired connection 1' (e49d727a-1413-3d39-bbad-8217f4075818)
Feb  2 05:54:35 np0005604943 NetworkManager[859]: <info>  [1770029675.1535] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 05:54:35 np0005604943 NetworkManager[859]: <info>  [1770029675.1537] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 05:54:35 np0005604943 NetworkManager[859]: <info>  [1770029675.1540] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 05:54:35 np0005604943 NetworkManager[859]: <info>  [1770029675.1543] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb  2 05:54:35 np0005604943 python3[6981]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-707d-9118-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 05:54:45 np0005604943 python3[7061]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 05:54:46 np0005604943 python3[7134]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770029685.6808577-102-198423017076932/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=9f4fe84ccb5ac35d22a38866c7b132f1c698c370 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:54:47 np0005604943 python3[7184]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 05:54:47 np0005604943 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb  2 05:54:47 np0005604943 systemd[1]: Stopped Network Manager Wait Online.
Feb  2 05:54:47 np0005604943 systemd[1]: Stopping Network Manager Wait Online...
Feb  2 05:54:47 np0005604943 systemd[1]: Stopping Network Manager...
Feb  2 05:54:47 np0005604943 NetworkManager[859]: <info>  [1770029687.1317] caught SIGTERM, shutting down normally.
Feb  2 05:54:47 np0005604943 NetworkManager[859]: <info>  [1770029687.1328] dhcp4 (eth0): canceled DHCP transaction
Feb  2 05:54:47 np0005604943 NetworkManager[859]: <info>  [1770029687.1328] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 05:54:47 np0005604943 NetworkManager[859]: <info>  [1770029687.1328] dhcp4 (eth0): state changed no lease
Feb  2 05:54:47 np0005604943 NetworkManager[859]: <info>  [1770029687.1335] manager: NetworkManager state is now CONNECTING
Feb  2 05:54:47 np0005604943 NetworkManager[859]: <info>  [1770029687.1422] dhcp4 (eth1): canceled DHCP transaction
Feb  2 05:54:47 np0005604943 NetworkManager[859]: <info>  [1770029687.1423] dhcp4 (eth1): state changed no lease
Feb  2 05:54:47 np0005604943 NetworkManager[859]: <info>  [1770029687.1475] exiting (success)
Feb  2 05:54:47 np0005604943 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 05:54:47 np0005604943 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 05:54:47 np0005604943 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb  2 05:54:47 np0005604943 systemd[1]: Stopped Network Manager.
Feb  2 05:54:47 np0005604943 systemd[1]: Starting Network Manager...
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.1828] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:dbf42354-b9fe-4201-8fff-3ebe30e4e21a)
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.1830] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.1871] manager[0x55adfaaac000]: monitoring kernel firmware directory '/lib/firmware'.
Feb  2 05:54:47 np0005604943 systemd[1]: Starting Hostname Service...
Feb  2 05:54:47 np0005604943 systemd[1]: Started Hostname Service.
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2618] hostname: hostname: using hostnamed
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2618] hostname: static hostname changed from (none) to "np0005604943.novalocal"
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2625] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2630] manager[0x55adfaaac000]: rfkill: Wi-Fi hardware radio set enabled
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2630] manager[0x55adfaaac000]: rfkill: WWAN hardware radio set enabled
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2659] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2661] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2662] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2663] manager: Networking is enabled by state file
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2666] settings: Loaded settings plugin: keyfile (internal)
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2670] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2699] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2710] dhcp: init: Using DHCP client 'internal'
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2713] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2719] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2725] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2734] device (lo): Activation: starting connection 'lo' (175d73e5-40e0-45b2-8b10-784bc91cfee9)
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2740] device (eth0): carrier: link connected
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2744] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2749] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2750] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2756] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2763] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2768] device (eth1): carrier: link connected
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2772] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2777] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (e49d727a-1413-3d39-bbad-8217f4075818) (indicated)
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2777] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2781] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2788] device (eth1): Activation: starting connection 'Wired connection 1' (e49d727a-1413-3d39-bbad-8217f4075818)
Feb  2 05:54:47 np0005604943 systemd[1]: Started Network Manager.
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2795] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2798] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2800] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2802] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2804] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2807] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2810] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2814] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2819] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2825] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2828] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2836] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2838] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2861] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2867] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2871] device (lo): Activation: successful, device activated.
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2886] dhcp4 (eth0): state changed new lease, address=38.102.83.41
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.2901] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb  2 05:54:47 np0005604943 systemd[1]: Starting Network Manager Wait Online...
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.3010] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.3038] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.3041] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.3047] manager: NetworkManager state is now CONNECTED_SITE
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.3052] device (eth0): Activation: successful, device activated.
Feb  2 05:54:47 np0005604943 NetworkManager[7193]: <info>  [1770029687.3061] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb  2 05:54:47 np0005604943 python3[7268]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-707d-9118-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 05:54:57 np0005604943 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 05:55:17 np0005604943 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.5826] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  2 05:55:32 np0005604943 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 05:55:32 np0005604943 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6096] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6100] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6109] device (eth1): Activation: successful, device activated.
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6120] manager: startup complete
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6123] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <warn>  [1770029732.6131] device (eth1): Activation: failed for connection 'Wired connection 1'
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6145] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Feb  2 05:55:32 np0005604943 systemd[1]: Finished Network Manager Wait Online.
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6353] dhcp4 (eth1): canceled DHCP transaction
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6353] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6353] dhcp4 (eth1): state changed no lease
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6374] policy: auto-activating connection 'ci-private-network' (89166417-2117-5428-b08a-28089d1bb51f)
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6381] device (eth1): Activation: starting connection 'ci-private-network' (89166417-2117-5428-b08a-28089d1bb51f)
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6383] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6388] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6397] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6410] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6469] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6473] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 05:55:32 np0005604943 NetworkManager[7193]: <info>  [1770029732.6479] device (eth1): Activation: successful, device activated.
Feb  2 05:55:42 np0005604943 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 05:55:44 np0005604943 python3[7373]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 05:55:44 np0005604943 python3[7446]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770029744.3708994-267-192545410168167/source _original_basename=tmpzt68unfw follow=False checksum=e4c393ff94986f8a93327c2207a14275aca333c2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 05:55:52 np0005604943 systemd[4309]: Starting Mark boot as successful...
Feb  2 05:55:52 np0005604943 systemd[4309]: Finished Mark boot as successful.
Feb  2 05:56:45 np0005604943 systemd-logind[786]: Session 1 logged out. Waiting for processes to exit.
Feb  2 05:58:52 np0005604943 systemd[4309]: Created slice User Background Tasks Slice.
Feb  2 05:58:52 np0005604943 systemd[4309]: Starting Cleanup of User's Temporary Files and Directories...
Feb  2 05:58:52 np0005604943 systemd[4309]: Finished Cleanup of User's Temporary Files and Directories.
Feb  2 06:03:24 np0005604943 systemd-logind[786]: New session 3 of user zuul.
Feb  2 06:03:24 np0005604943 systemd[1]: Started Session 3 of User zuul.
Feb  2 06:03:25 np0005604943 python3[7523]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-a546-43ca-000000002169-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:03:26 np0005604943 python3[7552]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:03:26 np0005604943 python3[7578]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:03:26 np0005604943 python3[7604]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:03:26 np0005604943 python3[7630]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:03:27 np0005604943 python3[7656]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:03:28 np0005604943 python3[7734]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:03:28 np0005604943 python3[7807]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770030207.746539-498-242155122348111/source _original_basename=tmpg0enuuoe follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:03:29 np0005604943 python3[7857]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 06:03:29 np0005604943 systemd[1]: Reloading.
Feb  2 06:03:29 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:03:31 np0005604943 python3[7913]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb  2 06:03:31 np0005604943 python3[7939]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:03:31 np0005604943 python3[7967]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:03:32 np0005604943 python3[7995]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:03:32 np0005604943 python3[8023]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:03:33 np0005604943 python3[8050]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-a546-43ca-000000002170-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:03:33 np0005604943 python3[8080]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 06:03:35 np0005604943 systemd[1]: session-3.scope: Deactivated successfully.
Feb  2 06:03:35 np0005604943 systemd[1]: session-3.scope: Consumed 3.836s CPU time.
Feb  2 06:03:35 np0005604943 systemd-logind[786]: Session 3 logged out. Waiting for processes to exit.
Feb  2 06:03:35 np0005604943 systemd-logind[786]: Removed session 3.
Feb  2 06:03:37 np0005604943 systemd-logind[786]: New session 4 of user zuul.
Feb  2 06:03:37 np0005604943 systemd[1]: Started Session 4 of User zuul.
Feb  2 06:03:37 np0005604943 python3[8115]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 06:03:44 np0005604943 setsebool[8158]: The virt_use_nfs policy boolean was changed to 1 by root
Feb  2 06:03:44 np0005604943 setsebool[8158]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb  2 06:03:56 np0005604943 kernel: SELinux:  Converting 386 SID table entries...
Feb  2 06:03:56 np0005604943 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 06:03:56 np0005604943 kernel: SELinux:  policy capability open_perms=1
Feb  2 06:03:56 np0005604943 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 06:03:56 np0005604943 kernel: SELinux:  policy capability always_check_network=0
Feb  2 06:03:56 np0005604943 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 06:03:56 np0005604943 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 06:03:56 np0005604943 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 06:04:10 np0005604943 kernel: SELinux:  Converting 389 SID table entries...
Feb  2 06:04:10 np0005604943 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 06:04:10 np0005604943 kernel: SELinux:  policy capability open_perms=1
Feb  2 06:04:10 np0005604943 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 06:04:10 np0005604943 kernel: SELinux:  policy capability always_check_network=0
Feb  2 06:04:10 np0005604943 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 06:04:10 np0005604943 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 06:04:10 np0005604943 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 06:04:31 np0005604943 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb  2 06:04:31 np0005604943 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 06:04:31 np0005604943 systemd[1]: Starting man-db-cache-update.service...
Feb  2 06:04:31 np0005604943 systemd[1]: Reloading.
Feb  2 06:04:31 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:04:31 np0005604943 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 06:04:36 np0005604943 python3[13286]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-11f8-f976-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:04:36 np0005604943 kernel: evm: overlay not supported
Feb  2 06:04:36 np0005604943 systemd[4309]: Starting D-Bus User Message Bus...
Feb  2 06:04:36 np0005604943 dbus-broker-launch[13969]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb  2 06:04:36 np0005604943 dbus-broker-launch[13969]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb  2 06:04:36 np0005604943 systemd[4309]: Started D-Bus User Message Bus.
Feb  2 06:04:36 np0005604943 dbus-broker-lau[13969]: Ready
Feb  2 06:04:36 np0005604943 systemd[4309]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb  2 06:04:36 np0005604943 systemd[4309]: Created slice Slice /user.
Feb  2 06:04:36 np0005604943 systemd[4309]: podman-13947.scope: unit configures an IP firewall, but not running as root.
Feb  2 06:04:36 np0005604943 systemd[4309]: (This warning is only shown for the first unit using IP firewalling.)
Feb  2 06:04:37 np0005604943 systemd[4309]: Started podman-13947.scope.
Feb  2 06:04:37 np0005604943 systemd[4309]: Started podman-pause-2465a80e.scope.
Feb  2 06:04:38 np0005604943 python3[14156]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.180:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.180:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:04:38 np0005604943 python3[14156]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Feb  2 06:04:38 np0005604943 systemd[1]: session-4.scope: Deactivated successfully.
Feb  2 06:04:38 np0005604943 systemd[1]: session-4.scope: Consumed 44.613s CPU time.
Feb  2 06:04:38 np0005604943 systemd-logind[786]: Session 4 logged out. Waiting for processes to exit.
Feb  2 06:04:38 np0005604943 systemd-logind[786]: Removed session 4.
Feb  2 06:05:02 np0005604943 systemd-logind[786]: New session 5 of user zuul.
Feb  2 06:05:02 np0005604943 systemd[1]: Started Session 5 of User zuul.
Feb  2 06:05:02 np0005604943 python3[24708]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFH1or0xR1IRg+BnsIp4D+UZvAvo92aMemMM7gXlX8LIjxvs8KTb9evrVIPbBD+RQK8Vwoj5o9PFZ7Kx19pcYWg= zuul@np0005604942.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 06:05:03 np0005604943 python3[24851]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFH1or0xR1IRg+BnsIp4D+UZvAvo92aMemMM7gXlX8LIjxvs8KTb9evrVIPbBD+RQK8Vwoj5o9PFZ7Kx19pcYWg= zuul@np0005604942.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 06:05:04 np0005604943 python3[25232]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005604943.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb  2 06:05:04 np0005604943 python3[25459]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFH1or0xR1IRg+BnsIp4D+UZvAvo92aMemMM7gXlX8LIjxvs8KTb9evrVIPbBD+RQK8Vwoj5o9PFZ7Kx19pcYWg= zuul@np0005604942.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 06:05:05 np0005604943 python3[25733]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:05:05 np0005604943 python3[26000]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1770030304.7342222-135-220308121334627/source _original_basename=tmpxjfnnrcg follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:05:06 np0005604943 python3[26288]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Feb  2 06:05:06 np0005604943 systemd[1]: Starting Hostname Service...
Feb  2 06:05:06 np0005604943 systemd[1]: Started Hostname Service.
Feb  2 06:05:06 np0005604943 systemd-hostnamed[26401]: Changed pretty hostname to 'compute-0'
Feb  2 06:05:06 np0005604943 systemd-hostnamed[26401]: Hostname set to <compute-0> (static)
Feb  2 06:05:06 np0005604943 NetworkManager[7193]: <info>  [1770030306.3554] hostname: static hostname changed from "np0005604943.novalocal" to "compute-0"
Feb  2 06:05:06 np0005604943 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 06:05:06 np0005604943 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 06:05:06 np0005604943 systemd[1]: session-5.scope: Deactivated successfully.
Feb  2 06:05:06 np0005604943 systemd[1]: session-5.scope: Consumed 2.138s CPU time.
Feb  2 06:05:06 np0005604943 systemd-logind[786]: Session 5 logged out. Waiting for processes to exit.
Feb  2 06:05:06 np0005604943 systemd-logind[786]: Removed session 5.
Feb  2 06:05:16 np0005604943 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 06:05:17 np0005604943 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 06:05:17 np0005604943 systemd[1]: Finished man-db-cache-update.service.
Feb  2 06:05:17 np0005604943 systemd[1]: man-db-cache-update.service: Consumed 44.435s CPU time.
Feb  2 06:05:17 np0005604943 systemd[1]: run-r10560acf80d9461b826cd1fe6edf56f5.service: Deactivated successfully.
Feb  2 06:05:36 np0005604943 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  2 06:07:52 np0005604943 systemd[1]: Starting Cleanup of Temporary Directories...
Feb  2 06:07:53 np0005604943 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb  2 06:07:53 np0005604943 systemd[1]: Finished Cleanup of Temporary Directories.
Feb  2 06:07:53 np0005604943 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb  2 06:08:41 np0005604943 systemd-logind[786]: New session 6 of user zuul.
Feb  2 06:08:41 np0005604943 systemd[1]: Started Session 6 of User zuul.
Feb  2 06:08:41 np0005604943 python3[30091]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:08:42 np0005604943 python3[30207]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:08:43 np0005604943 python3[30280]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770030522.745474-33703-174688474827147/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:08:43 np0005604943 python3[30306]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:08:44 np0005604943 python3[30379]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770030522.745474-33703-174688474827147/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:08:44 np0005604943 python3[30405]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:08:44 np0005604943 python3[30478]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770030522.745474-33703-174688474827147/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:08:44 np0005604943 python3[30504]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:08:45 np0005604943 python3[30577]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770030522.745474-33703-174688474827147/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:08:45 np0005604943 python3[30603]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:08:45 np0005604943 python3[30676]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770030522.745474-33703-174688474827147/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:08:45 np0005604943 python3[30702]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:08:46 np0005604943 python3[30775]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770030522.745474-33703-174688474827147/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:08:46 np0005604943 python3[30801]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:08:46 np0005604943 python3[30874]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770030522.745474-33703-174688474827147/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:08:57 np0005604943 python3[30932]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:11:52 np0005604943 systemd[1]: Starting dnf makecache...
Feb  2 06:11:53 np0005604943 dnf[30937]: Failed determining last makecache time.
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-openstack-barbican-42b4c41831408a8e323 351 kB/s |  13 kB     00:00
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-python-glean-642fffe0203a8ffcc2443db52 3.2 MB/s |  65 kB     00:00
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.3 MB/s |  32 kB     00:00
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-python-stevedore-c4acc5639fd2329372142 4.7 MB/s | 131 kB     00:00
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-python-cloudkitty-tests-tempest-783703 1.3 MB/s |  32 kB     00:00
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-diskimage-builder-61b717cc45660834fe9a  13 MB/s | 349 kB     00:00
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-openstack-nova-eaa65f0b85123a4ee343246 2.2 MB/s |  42 kB     00:00
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-python-designate-tests-tempest-347fdbc 794 kB/s |  18 kB     00:00
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-openstack-glance-1fd12c29b339f30fe823e 721 kB/s |  18 kB     00:00
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.4 MB/s |  29 kB     00:00
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-openstack-manila-d783d10e75495b73866db 1.1 MB/s |  25 kB     00:00
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-openstack-neutron-95cadbd379667c8520c8 6.3 MB/s | 154 kB     00:00
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-openstack-octavia-5975097dd4b021385178 1.0 MB/s |  26 kB     00:00
Feb  2 06:11:53 np0005604943 dnf[30937]: delorean-openstack-watcher-c014f81a8647287f6dcc 721 kB/s |  16 kB     00:00
Feb  2 06:11:54 np0005604943 dnf[30937]: delorean-python-tcib-78032d201b02cee27e8e644c61 356 kB/s | 7.4 kB     00:00
Feb  2 06:11:54 np0005604943 dnf[30937]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 4.9 MB/s | 144 kB     00:00
Feb  2 06:11:54 np0005604943 dnf[30937]: delorean-openstack-swift-dc98a8463506ac520c469a 637 kB/s |  14 kB     00:00
Feb  2 06:11:54 np0005604943 dnf[30937]: delorean-python-tempestconf-8515371b7cceebd4282 2.6 MB/s |  53 kB     00:00
Feb  2 06:11:54 np0005604943 dnf[30937]: delorean-openstack-heat-ui-013accbfd179753bc3f0 3.5 MB/s |  96 kB     00:00
Feb  2 06:11:54 np0005604943 dnf[30937]: CentOS Stream 9 - BaseOS                         63 kB/s | 6.7 kB     00:00
Feb  2 06:11:54 np0005604943 dnf[30937]: CentOS Stream 9 - AppStream                      70 kB/s | 6.8 kB     00:00
Feb  2 06:11:54 np0005604943 dnf[30937]: CentOS Stream 9 - CRB                            66 kB/s | 6.6 kB     00:00
Feb  2 06:11:54 np0005604943 dnf[30937]: CentOS Stream 9 - Extras packages                72 kB/s | 7.3 kB     00:00
Feb  2 06:11:54 np0005604943 dnf[30937]: dlrn-antelope-testing                            27 MB/s | 1.1 MB     00:00
Feb  2 06:11:55 np0005604943 dnf[30937]: dlrn-antelope-build-deps                         16 MB/s | 461 kB     00:00
Feb  2 06:11:55 np0005604943 dnf[30937]: centos9-rabbitmq                                9.0 MB/s | 123 kB     00:00
Feb  2 06:11:55 np0005604943 dnf[30937]: centos9-storage                                  25 MB/s | 415 kB     00:00
Feb  2 06:11:55 np0005604943 dnf[30937]: centos9-opstools                                4.6 MB/s |  51 kB     00:00
Feb  2 06:11:55 np0005604943 dnf[30937]: NFV SIG OpenvSwitch                              22 MB/s | 461 kB     00:00
Feb  2 06:11:56 np0005604943 dnf[30937]: repo-setup-centos-appstream                     132 MB/s |  26 MB     00:00
Feb  2 06:12:01 np0005604943 dnf[30937]: repo-setup-centos-baseos                        117 MB/s | 8.9 MB     00:00
Feb  2 06:12:03 np0005604943 dnf[30937]: repo-setup-centos-highavailability               22 MB/s | 744 kB     00:00
Feb  2 06:12:03 np0005604943 dnf[30937]: repo-setup-centos-powertools                     74 MB/s | 7.6 MB     00:00
Feb  2 06:12:06 np0005604943 dnf[30937]: Extra Packages for Enterprise Linux 9 - x86_64   42 MB/s |  20 MB     00:00
Feb  2 06:12:19 np0005604943 dnf[30937]: Metadata cache created.
Feb  2 06:12:19 np0005604943 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb  2 06:12:19 np0005604943 systemd[1]: Finished dnf makecache.
Feb  2 06:12:19 np0005604943 systemd[1]: dnf-makecache.service: Consumed 24.262s CPU time.
Feb  2 06:13:57 np0005604943 systemd[1]: session-6.scope: Deactivated successfully.
Feb  2 06:13:57 np0005604943 systemd[1]: session-6.scope: Consumed 4.435s CPU time.
Feb  2 06:13:57 np0005604943 systemd-logind[786]: Session 6 logged out. Waiting for processes to exit.
Feb  2 06:13:57 np0005604943 systemd-logind[786]: Removed session 6.
Feb  2 06:23:10 np0005604943 systemd-logind[786]: New session 7 of user zuul.
Feb  2 06:23:10 np0005604943 systemd[1]: Started Session 7 of User zuul.
Feb  2 06:23:11 np0005604943 python3.9[31209]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:23:12 np0005604943 python3.9[31390]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:23:19 np0005604943 systemd[1]: session-7.scope: Deactivated successfully.
Feb  2 06:23:19 np0005604943 systemd[1]: session-7.scope: Consumed 7.279s CPU time.
Feb  2 06:23:19 np0005604943 systemd-logind[786]: Session 7 logged out. Waiting for processes to exit.
Feb  2 06:23:19 np0005604943 systemd-logind[786]: Removed session 7.
Feb  2 06:23:34 np0005604943 systemd-logind[786]: New session 8 of user zuul.
Feb  2 06:23:34 np0005604943 systemd[1]: Started Session 8 of User zuul.
Feb  2 06:23:35 np0005604943 python3.9[31603]: ansible-ansible.legacy.ping Invoked with data=pong
Feb  2 06:23:36 np0005604943 python3.9[31777]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:23:37 np0005604943 python3.9[31929]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:23:38 np0005604943 python3.9[32082]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:23:38 np0005604943 python3.9[32234]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:23:39 np0005604943 python3.9[32386]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:23:40 np0005604943 python3.9[32509]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031419.0780022-68-19617079109828/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:23:40 np0005604943 python3.9[32661]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:23:41 np0005604943 python3.9[32817]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:23:42 np0005604943 python3.9[32969]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:23:42 np0005604943 python3.9[33119]: ansible-ansible.builtin.service_facts Invoked
Feb  2 06:23:45 np0005604943 python3.9[33372]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:23:46 np0005604943 python3.9[33522]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:23:47 np0005604943 python3.9[33676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:23:48 np0005604943 python3.9[33834]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:23:49 np0005604943 python3.9[33918]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:24:35 np0005604943 systemd[1]: Reloading.
Feb  2 06:24:35 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:24:35 np0005604943 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb  2 06:24:35 np0005604943 systemd[1]: Reloading.
Feb  2 06:24:35 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:24:35 np0005604943 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb  2 06:24:35 np0005604943 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb  2 06:24:35 np0005604943 systemd[1]: Reloading.
Feb  2 06:24:35 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:24:35 np0005604943 systemd[1]: Listening on LVM2 poll daemon socket.
Feb  2 06:24:36 np0005604943 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Feb  2 06:24:36 np0005604943 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Feb  2 06:25:30 np0005604943 kernel: SELinux:  Converting 2728 SID table entries...
Feb  2 06:25:30 np0005604943 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 06:25:30 np0005604943 kernel: SELinux:  policy capability open_perms=1
Feb  2 06:25:30 np0005604943 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 06:25:30 np0005604943 kernel: SELinux:  policy capability always_check_network=0
Feb  2 06:25:30 np0005604943 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 06:25:30 np0005604943 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 06:25:30 np0005604943 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 06:25:30 np0005604943 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Feb  2 06:25:31 np0005604943 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 06:25:31 np0005604943 systemd[1]: Starting man-db-cache-update.service...
Feb  2 06:25:31 np0005604943 systemd[1]: Reloading.
Feb  2 06:25:31 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:25:31 np0005604943 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 06:25:31 np0005604943 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 06:25:31 np0005604943 systemd[1]: Finished man-db-cache-update.service.
Feb  2 06:25:31 np0005604943 systemd[1]: run-re0411d65c2ba4b5986ca3110af625509.service: Deactivated successfully.
Feb  2 06:25:32 np0005604943 python3.9[35434]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:25:34 np0005604943 python3.9[35717]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb  2 06:25:34 np0005604943 python3.9[35869]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb  2 06:25:38 np0005604943 python3.9[36022]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:25:39 np0005604943 python3.9[36174]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb  2 06:25:40 np0005604943 python3.9[36326]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:25:41 np0005604943 python3.9[36478]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:25:42 np0005604943 python3.9[36601]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031541.1226017-231-265949314400112/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=89824fc7f99ee1c7063912a8f8135620e81daa3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:25:42 np0005604943 python3.9[36753]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:25:43 np0005604943 python3.9[36905]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:25:43 np0005604943 python3.9[37058]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:25:45 np0005604943 python3.9[37210]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb  2 06:25:45 np0005604943 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 06:25:45 np0005604943 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 06:25:45 np0005604943 python3.9[37364]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 06:25:46 np0005604943 python3.9[37522]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  2 06:25:47 np0005604943 python3.9[37682]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb  2 06:25:47 np0005604943 python3.9[37835]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 06:25:48 np0005604943 python3.9[37993]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb  2 06:25:49 np0005604943 python3.9[38145]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:25:52 np0005604943 python3.9[38298]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:25:52 np0005604943 python3.9[38450]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:25:53 np0005604943 python3.9[38573]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031552.3104813-350-277287738332860/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:25:54 np0005604943 python3.9[38725]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:25:54 np0005604943 systemd[1]: Starting Load Kernel Modules...
Feb  2 06:25:54 np0005604943 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb  2 06:25:54 np0005604943 kernel: Bridge firewalling registered
Feb  2 06:25:54 np0005604943 systemd-modules-load[38729]: Inserted module 'br_netfilter'
Feb  2 06:25:54 np0005604943 systemd[1]: Finished Load Kernel Modules.
Feb  2 06:25:54 np0005604943 python3.9[38885]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:25:55 np0005604943 python3.9[39008]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031554.4949534-373-163102733431141/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:25:56 np0005604943 python3.9[39160]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:25:59 np0005604943 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Feb  2 06:25:59 np0005604943 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Feb  2 06:25:59 np0005604943 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 06:25:59 np0005604943 systemd[1]: Starting man-db-cache-update.service...
Feb  2 06:25:59 np0005604943 systemd[1]: Reloading.
Feb  2 06:25:59 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:25:59 np0005604943 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 06:26:00 np0005604943 python3.9[40667]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:26:01 np0005604943 python3.9[41838]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb  2 06:26:02 np0005604943 python3.9[42675]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:26:02 np0005604943 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 06:26:02 np0005604943 systemd[1]: Finished man-db-cache-update.service.
Feb  2 06:26:02 np0005604943 systemd[1]: man-db-cache-update.service: Consumed 3.636s CPU time.
Feb  2 06:26:02 np0005604943 systemd[1]: run-r0a20b3e52a0947129764dfcc893deb07.service: Deactivated successfully.
Feb  2 06:26:02 np0005604943 python3.9[43365]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:26:02 np0005604943 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb  2 06:26:03 np0005604943 systemd[1]: Starting Authorization Manager...
Feb  2 06:26:03 np0005604943 systemd[1]: Started Dynamic System Tuning Daemon.
Feb  2 06:26:03 np0005604943 polkitd[43582]: Started polkitd version 0.117
Feb  2 06:26:03 np0005604943 systemd[1]: Started Authorization Manager.
Feb  2 06:26:04 np0005604943 python3.9[43752]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:26:04 np0005604943 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb  2 06:26:04 np0005604943 systemd[1]: tuned.service: Deactivated successfully.
Feb  2 06:26:04 np0005604943 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb  2 06:26:04 np0005604943 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb  2 06:26:04 np0005604943 systemd[1]: Started Dynamic System Tuning Daemon.
Feb  2 06:26:04 np0005604943 python3.9[43914]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb  2 06:26:06 np0005604943 python3.9[44066]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:26:07 np0005604943 systemd[1]: Reloading.
Feb  2 06:26:07 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:26:07 np0005604943 python3.9[44256]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:26:07 np0005604943 systemd[1]: Reloading.
Feb  2 06:26:07 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:26:08 np0005604943 python3.9[44445]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:26:09 np0005604943 python3.9[44598]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:26:09 np0005604943 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Feb  2 06:26:09 np0005604943 python3.9[44751]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:26:11 np0005604943 python3.9[44913]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:26:12 np0005604943 python3.9[45066]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:26:12 np0005604943 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  2 06:26:12 np0005604943 systemd[1]: Stopped Apply Kernel Variables.
Feb  2 06:26:12 np0005604943 systemd[1]: Stopping Apply Kernel Variables...
Feb  2 06:26:12 np0005604943 systemd[1]: Starting Apply Kernel Variables...
Feb  2 06:26:12 np0005604943 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb  2 06:26:12 np0005604943 systemd[1]: Finished Apply Kernel Variables.
Feb  2 06:26:12 np0005604943 systemd[1]: session-8.scope: Deactivated successfully.
Feb  2 06:26:12 np0005604943 systemd[1]: session-8.scope: Consumed 2min 1.758s CPU time.
Feb  2 06:26:12 np0005604943 systemd-logind[786]: Session 8 logged out. Waiting for processes to exit.
Feb  2 06:26:12 np0005604943 systemd-logind[786]: Removed session 8.
Feb  2 06:26:19 np0005604943 systemd-logind[786]: New session 9 of user zuul.
Feb  2 06:26:19 np0005604943 systemd[1]: Started Session 9 of User zuul.
Feb  2 06:26:20 np0005604943 python3.9[45250]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:26:21 np0005604943 python3.9[45406]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb  2 06:26:22 np0005604943 python3.9[45559]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 06:26:22 np0005604943 python3.9[45717]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  2 06:26:23 np0005604943 python3.9[45877]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:26:24 np0005604943 python3.9[45961]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  2 06:26:27 np0005604943 python3.9[46124]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:26:37 np0005604943 kernel: SELinux:  Converting 2740 SID table entries...
Feb  2 06:26:37 np0005604943 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 06:26:37 np0005604943 kernel: SELinux:  policy capability open_perms=1
Feb  2 06:26:37 np0005604943 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 06:26:37 np0005604943 kernel: SELinux:  policy capability always_check_network=0
Feb  2 06:26:37 np0005604943 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 06:26:37 np0005604943 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 06:26:37 np0005604943 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 06:26:38 np0005604943 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Feb  2 06:26:38 np0005604943 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb  2 06:26:39 np0005604943 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 06:26:39 np0005604943 systemd[1]: Starting man-db-cache-update.service...
Feb  2 06:26:39 np0005604943 systemd[1]: Reloading.
Feb  2 06:26:39 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:26:39 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:26:39 np0005604943 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 06:26:40 np0005604943 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 06:26:40 np0005604943 systemd[1]: Finished man-db-cache-update.service.
Feb  2 06:26:40 np0005604943 systemd[1]: run-rab8cd4ed70804eff9e5887e4009712c2.service: Deactivated successfully.
Feb  2 06:26:41 np0005604943 python3.9[47221]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 06:26:41 np0005604943 systemd[1]: Reloading.
Feb  2 06:26:41 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:26:41 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:26:41 np0005604943 systemd[1]: Starting Open vSwitch Database Unit...
Feb  2 06:26:41 np0005604943 chown[47263]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb  2 06:26:41 np0005604943 ovs-ctl[47268]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb  2 06:26:41 np0005604943 ovs-ctl[47268]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb  2 06:26:41 np0005604943 ovs-ctl[47268]: Starting ovsdb-server [  OK  ]
Feb  2 06:26:41 np0005604943 ovs-vsctl[47318]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb  2 06:26:41 np0005604943 ovs-vsctl[47338]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"63c28000-4b99-40fb-b19f-6b3ba1922f6d\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Feb  2 06:26:41 np0005604943 ovs-ctl[47268]: Configuring Open vSwitch system IDs [  OK  ]
Feb  2 06:26:41 np0005604943 ovs-vsctl[47344]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb  2 06:26:41 np0005604943 ovs-ctl[47268]: Enabling remote OVSDB managers [  OK  ]
Feb  2 06:26:41 np0005604943 systemd[1]: Started Open vSwitch Database Unit.
Feb  2 06:26:41 np0005604943 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb  2 06:26:41 np0005604943 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb  2 06:26:41 np0005604943 systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb  2 06:26:41 np0005604943 kernel: openvswitch: Open vSwitch switching datapath
Feb  2 06:26:41 np0005604943 ovs-ctl[47388]: Inserting openvswitch module [  OK  ]
Feb  2 06:26:41 np0005604943 ovs-ctl[47357]: Starting ovs-vswitchd [  OK  ]
Feb  2 06:26:41 np0005604943 ovs-ctl[47357]: Enabling remote OVSDB managers [  OK  ]
Feb  2 06:26:41 np0005604943 systemd[1]: Started Open vSwitch Forwarding Unit.
Feb  2 06:26:41 np0005604943 ovs-vsctl[47406]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb  2 06:26:41 np0005604943 systemd[1]: Starting Open vSwitch...
Feb  2 06:26:41 np0005604943 systemd[1]: Finished Open vSwitch.
Feb  2 06:26:42 np0005604943 python3.9[47557]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:26:43 np0005604943 python3.9[47709]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb  2 06:26:44 np0005604943 kernel: SELinux:  Converting 2754 SID table entries...
Feb  2 06:26:44 np0005604943 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 06:26:44 np0005604943 kernel: SELinux:  policy capability open_perms=1
Feb  2 06:26:44 np0005604943 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 06:26:44 np0005604943 kernel: SELinux:  policy capability always_check_network=0
Feb  2 06:26:44 np0005604943 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 06:26:44 np0005604943 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 06:26:44 np0005604943 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 06:26:45 np0005604943 python3.9[47864]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:26:46 np0005604943 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Feb  2 06:26:46 np0005604943 python3.9[48022]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:26:48 np0005604943 python3.9[48175]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:26:49 np0005604943 python3.9[48462]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb  2 06:26:50 np0005604943 python3.9[48612]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:26:51 np0005604943 python3.9[48766]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:26:53 np0005604943 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 06:26:53 np0005604943 systemd[1]: Starting man-db-cache-update.service...
Feb  2 06:26:53 np0005604943 systemd[1]: Reloading.
Feb  2 06:26:53 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:26:53 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:26:53 np0005604943 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 06:26:53 np0005604943 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 06:26:53 np0005604943 systemd[1]: Finished man-db-cache-update.service.
Feb  2 06:26:53 np0005604943 systemd[1]: run-r398e2a5285704a5baccf2e22220964a5.service: Deactivated successfully.
Feb  2 06:26:54 np0005604943 python3.9[49084]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:26:54 np0005604943 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb  2 06:26:54 np0005604943 systemd[1]: Stopped Network Manager Wait Online.
Feb  2 06:26:54 np0005604943 systemd[1]: Stopping Network Manager Wait Online...
Feb  2 06:26:54 np0005604943 systemd[1]: Stopping Network Manager...
Feb  2 06:26:54 np0005604943 NetworkManager[7193]: <info>  [1770031614.5219] caught SIGTERM, shutting down normally.
Feb  2 06:26:54 np0005604943 NetworkManager[7193]: <info>  [1770031614.5235] dhcp4 (eth0): canceled DHCP transaction
Feb  2 06:26:54 np0005604943 NetworkManager[7193]: <info>  [1770031614.5235] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 06:26:54 np0005604943 NetworkManager[7193]: <info>  [1770031614.5235] dhcp4 (eth0): state changed no lease
Feb  2 06:26:54 np0005604943 NetworkManager[7193]: <info>  [1770031614.5240] manager: NetworkManager state is now CONNECTED_SITE
Feb  2 06:26:54 np0005604943 NetworkManager[7193]: <info>  [1770031614.5308] exiting (success)
Feb  2 06:26:54 np0005604943 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 06:26:54 np0005604943 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb  2 06:26:54 np0005604943 systemd[1]: Stopped Network Manager.
Feb  2 06:26:54 np0005604943 systemd[1]: NetworkManager.service: Consumed 14.621s CPU time, 4.1M memory peak, read 0B from disk, written 13.5K to disk.
Feb  2 06:26:54 np0005604943 systemd[1]: Starting Network Manager...
Feb  2 06:26:54 np0005604943 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.5827] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:dbf42354-b9fe-4201-8fff-3ebe30e4e21a)
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.5829] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.5881] manager[0x5582193bf000]: monitoring kernel firmware directory '/lib/firmware'.
Feb  2 06:26:54 np0005604943 systemd[1]: Starting Hostname Service...
Feb  2 06:26:54 np0005604943 systemd[1]: Started Hostname Service.
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6468] hostname: hostname: using hostnamed
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6470] hostname: static hostname changed from (none) to "compute-0"
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6475] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6480] manager[0x5582193bf000]: rfkill: Wi-Fi hardware radio set enabled
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6480] manager[0x5582193bf000]: rfkill: WWAN hardware radio set enabled
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6504] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6513] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6514] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6514] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6515] manager: Networking is enabled by state file
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6517] settings: Loaded settings plugin: keyfile (internal)
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6520] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6554] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6563] dhcp: init: Using DHCP client 'internal'
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6566] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6571] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6576] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6584] device (lo): Activation: starting connection 'lo' (175d73e5-40e0-45b2-8b10-784bc91cfee9)
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6590] device (eth0): carrier: link connected
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6594] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6599] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6600] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6605] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6612] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6617] device (eth1): carrier: link connected
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6621] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6626] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (89166417-2117-5428-b08a-28089d1bb51f) (indicated)
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6628] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6633] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6640] device (eth1): Activation: starting connection 'ci-private-network' (89166417-2117-5428-b08a-28089d1bb51f)
Feb  2 06:26:54 np0005604943 systemd[1]: Started Network Manager.
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6651] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6658] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6660] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6662] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6663] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6666] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6669] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6671] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6673] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6682] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6694] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6706] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6728] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6745] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6750] dhcp4 (eth0): state changed new lease, address=38.102.83.41
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6756] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6763] device (lo): Activation: successful, device activated.
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6779] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb  2 06:26:54 np0005604943 systemd[1]: Starting Network Manager Wait Online...
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6862] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6871] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6881] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6886] manager: NetworkManager state is now CONNECTED_LOCAL
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6889] device (eth1): Activation: successful, device activated.
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6912] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6915] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6920] manager: NetworkManager state is now CONNECTED_SITE
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6924] device (eth0): Activation: successful, device activated.
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6933] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb  2 06:26:54 np0005604943 NetworkManager[49093]: <info>  [1770031614.6940] manager: startup complete
Feb  2 06:26:54 np0005604943 systemd[1]: Finished Network Manager Wait Online.
Feb  2 06:26:55 np0005604943 python3.9[49310]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:26:59 np0005604943 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 06:26:59 np0005604943 systemd[1]: Starting man-db-cache-update.service...
Feb  2 06:26:59 np0005604943 systemd[1]: Reloading.
Feb  2 06:26:59 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:26:59 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:26:59 np0005604943 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 06:27:00 np0005604943 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 06:27:00 np0005604943 systemd[1]: Finished man-db-cache-update.service.
Feb  2 06:27:00 np0005604943 systemd[1]: run-rc5509ed9fde9409c98fb37063b4f552a.service: Deactivated successfully.
Feb  2 06:27:01 np0005604943 python3.9[49769]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:27:01 np0005604943 python3.9[49921]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:27:02 np0005604943 python3.9[50075]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:27:03 np0005604943 python3.9[50227]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:27:03 np0005604943 python3.9[50379]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:27:04 np0005604943 python3.9[50531]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:27:04 np0005604943 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 06:27:05 np0005604943 python3.9[50683]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:27:05 np0005604943 python3.9[50806]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031624.7210233-224-241260261654191/.source _original_basename=.gcg45_gf follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:27:06 np0005604943 python3.9[50958]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:27:07 np0005604943 python3.9[51110]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Feb  2 06:27:08 np0005604943 python3.9[51262]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:27:10 np0005604943 python3.9[51689]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Feb  2 06:27:11 np0005604943 ansible-async_wrapper.py[51864]: Invoked with j952474451850 300 /home/zuul/.ansible/tmp/ansible-tmp-1770031630.2808313-290-269248644412214/AnsiballZ_edpm_os_net_config.py _
Feb  2 06:27:11 np0005604943 ansible-async_wrapper.py[51867]: Starting module and watcher
Feb  2 06:27:11 np0005604943 ansible-async_wrapper.py[51867]: Start watching 51868 (300)
Feb  2 06:27:11 np0005604943 ansible-async_wrapper.py[51868]: Start module (51868)
Feb  2 06:27:11 np0005604943 ansible-async_wrapper.py[51864]: Return async_wrapper task started.
Feb  2 06:27:11 np0005604943 python3.9[51869]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Feb  2 06:27:12 np0005604943 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Feb  2 06:27:12 np0005604943 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Feb  2 06:27:12 np0005604943 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Feb  2 06:27:12 np0005604943 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Feb  2 06:27:12 np0005604943 kernel: cfg80211: failed to load regulatory.db
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.6962] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.6985] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7748] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7751] audit: op="connection-add" uuid="5fe18811-f5fc-4726-944c-962826931dbf" name="br-ex-br" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7772] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7775] audit: op="connection-add" uuid="9c458a88-736e-4a66-a90e-1e5264a4e003" name="br-ex-port" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7791] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7794] audit: op="connection-add" uuid="36369f25-ca4f-4e42-89ca-71555a68c567" name="eth1-port" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7809] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7813] audit: op="connection-add" uuid="1d87766a-5810-421a-80b3-72d04cee157d" name="vlan20-port" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7828] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7831] audit: op="connection-add" uuid="657a8b27-1894-466f-a98d-edb72d9a1e96" name="vlan21-port" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7846] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7849] audit: op="connection-add" uuid="329bf377-cda6-4990-a487-7c6e03a0840f" name="vlan22-port" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7864] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7868] audit: op="connection-add" uuid="70d5713b-7902-4b7d-bb74-6fd49049eb78" name="vlan23-port" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7892] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7913] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7916] audit: op="connection-add" uuid="113dbc34-0662-4a06-b829-bac50f3bb2c0" name="br-ex-if" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7958] audit: op="connection-update" uuid="89166417-2117-5428-b08a-28089d1bb51f" name="ci-private-network" args="ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.method,ipv6.addresses,ipv6.dns,ipv6.routes,connection.master,connection.controller,connection.slave-type,connection.timestamp,connection.port-type,ipv4.routing-rules,ipv4.routes,ipv4.method,ipv4.addresses,ipv4.dns,ipv4.never-default,ovs-external-ids.data,ovs-interface.type" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7979] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.7981] audit: op="connection-add" uuid="ea25a09a-f58a-4965-b176-638abaadbfc0" name="vlan20-if" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8004] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8008] audit: op="connection-add" uuid="92f24feb-f503-4aa4-9129-d90d70524483" name="vlan21-if" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8030] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8032] audit: op="connection-add" uuid="3879bfef-3ebf-461b-88ec-19ccc5e43edb" name="vlan22-if" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8054] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8057] audit: op="connection-add" uuid="b2c38f32-36e0-4928-ba6c-566f0ef9eb14" name="vlan23-if" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8071] audit: op="connection-delete" uuid="e49d727a-1413-3d39-bbad-8217f4075818" name="Wired connection 1" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8087] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <warn>  [1770031633.8091] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8101] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8107] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (5fe18811-f5fc-4726-944c-962826931dbf)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8109] audit: op="connection-activate" uuid="5fe18811-f5fc-4726-944c-962826931dbf" name="br-ex-br" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8112] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <warn>  [1770031633.8114] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8122] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8128] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (9c458a88-736e-4a66-a90e-1e5264a4e003)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8131] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <warn>  [1770031633.8133] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8139] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8147] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (36369f25-ca4f-4e42-89ca-71555a68c567)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8149] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <warn>  [1770031633.8149] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8156] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8161] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (1d87766a-5810-421a-80b3-72d04cee157d)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8164] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <warn>  [1770031633.8165] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8172] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8177] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (657a8b27-1894-466f-a98d-edb72d9a1e96)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8179] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <warn>  [1770031633.8179] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8185] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8189] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (329bf377-cda6-4990-a487-7c6e03a0840f)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8191] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <warn>  [1770031633.8193] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8198] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8202] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (70d5713b-7902-4b7d-bb74-6fd49049eb78)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8203] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8206] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8207] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8213] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <warn>  [1770031633.8214] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8216] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8222] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (113dbc34-0662-4a06-b829-bac50f3bb2c0)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8224] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8229] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8231] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8233] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8234] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8243] device (eth1): disconnecting for new activation request.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8244] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8246] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8247] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8248] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8249] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <warn>  [1770031633.8250] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8251] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8254] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (ea25a09a-f58a-4965-b176-638abaadbfc0)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8255] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8256] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8258] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8258] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8260] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <warn>  [1770031633.8261] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8263] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8265] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (92f24feb-f503-4aa4-9129-d90d70524483)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8266] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8267] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8268] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8269] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8271] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <warn>  [1770031633.8271] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8273] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8275] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (3879bfef-3ebf-461b-88ec-19ccc5e43edb)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8276] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8277] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8278] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8279] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8281] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <warn>  [1770031633.8281] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8283] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8286] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (b2c38f32-36e0-4928-ba6c-566f0ef9eb14)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8286] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8288] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8289] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8290] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8291] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8299] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8300] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8303] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8304] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8308] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8310] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8313] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8315] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8316] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8319] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8322] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8324] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8325] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8328] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8330] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 kernel: ovs-system: entered promiscuous mode
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8332] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8334] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8338] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8342] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8346] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8348] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8352] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8357] dhcp4 (eth0): canceled DHCP transaction
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8357] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8357] dhcp4 (eth0): state changed no lease
Feb  2 06:27:13 np0005604943 systemd-udevd[51876]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:27:13 np0005604943 kernel: Timeout policy base is empty
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8358] dhcp4 (eth0): activation: beginning transaction (no timeout)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8368] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8370] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51870 uid=0 result="fail" reason="Device is not activated"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8406] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8413] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8418] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8424] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8427] dhcp4 (eth0): state changed new lease, address=38.102.83.41
Feb  2 06:27:13 np0005604943 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8467] device (eth1): disconnecting for new activation request.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8468] audit: op="connection-activate" uuid="89166417-2117-5428-b08a-28089d1bb51f" name="ci-private-network" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8503] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51870 uid=0 result="success"
Feb  2 06:27:13 np0005604943 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8565] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8628] device (eth1): Activation: starting connection 'ci-private-network' (89166417-2117-5428-b08a-28089d1bb51f)
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8633] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8640] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8643] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8648] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8651] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8654] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8656] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8657] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8658] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8659] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8660] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8662] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8668] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8671] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8674] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8677] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8680] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8684] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8687] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8691] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8695] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8699] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8702] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8706] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8711] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8715] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 kernel: br-ex: entered promiscuous mode
Feb  2 06:27:13 np0005604943 kernel: vlan22: entered promiscuous mode
Feb  2 06:27:13 np0005604943 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Feb  2 06:27:13 np0005604943 kernel: vlan23: entered promiscuous mode
Feb  2 06:27:13 np0005604943 kernel: vlan20: entered promiscuous mode
Feb  2 06:27:13 np0005604943 kernel: vlan21: entered promiscuous mode
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8769] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8771] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.8776] device (eth1): Activation: successful, device activated.
Feb  2 06:27:13 np0005604943 systemd-udevd[51875]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:27:13 np0005604943 systemd-udevd[51874]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9026] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9035] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9065] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9066] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9071] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 06:27:13 np0005604943 systemd-udevd[51981]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9229] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9236] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9253] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9254] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9258] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9343] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9352] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9387] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9391] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9395] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9403] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9418] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9433] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9437] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9444] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9452] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9466] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9501] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9505] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 06:27:13 np0005604943 NetworkManager[49093]: <info>  [1770031633.9510] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 06:27:14 np0005604943 python3.9[52228]: ansible-ansible.legacy.async_status Invoked with jid=j952474451850.51864 mode=status _async_dir=/root/.ansible_async
Feb  2 06:27:15 np0005604943 NetworkManager[49093]: <info>  [1770031635.0939] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51870 uid=0 result="success"
Feb  2 06:27:15 np0005604943 NetworkManager[49093]: <info>  [1770031635.3755] checkpoint[0x558219393950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Feb  2 06:27:15 np0005604943 NetworkManager[49093]: <info>  [1770031635.3757] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51870 uid=0 result="success"
Feb  2 06:27:15 np0005604943 NetworkManager[49093]: <info>  [1770031635.6770] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51870 uid=0 result="success"
Feb  2 06:27:15 np0005604943 NetworkManager[49093]: <info>  [1770031635.6779] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51870 uid=0 result="success"
Feb  2 06:27:15 np0005604943 NetworkManager[49093]: <info>  [1770031635.8756] audit: op="networking-control" arg="global-dns-configuration" pid=51870 uid=0 result="success"
Feb  2 06:27:15 np0005604943 NetworkManager[49093]: <info>  [1770031635.8787] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Feb  2 06:27:15 np0005604943 NetworkManager[49093]: <info>  [1770031635.8820] audit: op="networking-control" arg="global-dns-configuration" pid=51870 uid=0 result="success"
Feb  2 06:27:15 np0005604943 NetworkManager[49093]: <info>  [1770031635.8861] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51870 uid=0 result="success"
Feb  2 06:27:16 np0005604943 NetworkManager[49093]: <info>  [1770031636.0460] checkpoint[0x558219393a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Feb  2 06:27:16 np0005604943 NetworkManager[49093]: <info>  [1770031636.0464] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51870 uid=0 result="success"
Feb  2 06:27:16 np0005604943 ansible-async_wrapper.py[51868]: Module complete (51868)
Feb  2 06:27:16 np0005604943 ansible-async_wrapper.py[51867]: 51868 still running (300)
Feb  2 06:27:18 np0005604943 python3.9[52334]: ansible-ansible.legacy.async_status Invoked with jid=j952474451850.51864 mode=status _async_dir=/root/.ansible_async
Feb  2 06:27:18 np0005604943 python3.9[52434]: ansible-ansible.legacy.async_status Invoked with jid=j952474451850.51864 mode=cleanup _async_dir=/root/.ansible_async
Feb  2 06:27:19 np0005604943 python3.9[52586]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:27:20 np0005604943 python3.9[52709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031639.1842952-317-60406534987163/.source.returncode _original_basename=.iemvyzq5 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:27:21 np0005604943 ansible-async_wrapper.py[51867]: Done in kid B.
Feb  2 06:27:21 np0005604943 python3.9[52861]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:27:21 np0005604943 python3.9[52984]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031640.7797089-333-8826200906823/.source.cfg _original_basename=.ry1wvf4m follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:27:22 np0005604943 python3.9[53137]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:27:22 np0005604943 systemd[1]: Reloading Network Manager...
Feb  2 06:27:22 np0005604943 NetworkManager[49093]: <info>  [1770031642.5738] audit: op="reload" arg="0" pid=53141 uid=0 result="success"
Feb  2 06:27:22 np0005604943 NetworkManager[49093]: <info>  [1770031642.5748] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Feb  2 06:27:22 np0005604943 systemd[1]: Reloaded Network Manager.
Feb  2 06:27:22 np0005604943 systemd[1]: session-9.scope: Deactivated successfully.
Feb  2 06:27:22 np0005604943 systemd[1]: session-9.scope: Consumed 47.192s CPU time.
Feb  2 06:27:22 np0005604943 systemd-logind[786]: Session 9 logged out. Waiting for processes to exit.
Feb  2 06:27:22 np0005604943 systemd-logind[786]: Removed session 9.
Feb  2 06:27:24 np0005604943 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  2 06:27:28 np0005604943 systemd-logind[786]: New session 10 of user zuul.
Feb  2 06:27:28 np0005604943 systemd[1]: Started Session 10 of User zuul.
Feb  2 06:27:29 np0005604943 python3.9[53327]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:27:30 np0005604943 python3.9[53481]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:27:31 np0005604943 python3.9[53674]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:27:31 np0005604943 systemd[1]: session-10.scope: Deactivated successfully.
Feb  2 06:27:31 np0005604943 systemd[1]: session-10.scope: Consumed 2.228s CPU time.
Feb  2 06:27:31 np0005604943 systemd-logind[786]: Session 10 logged out. Waiting for processes to exit.
Feb  2 06:27:31 np0005604943 systemd-logind[786]: Removed session 10.
Feb  2 06:27:32 np0005604943 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 06:27:37 np0005604943 systemd-logind[786]: New session 11 of user zuul.
Feb  2 06:27:37 np0005604943 systemd[1]: Started Session 11 of User zuul.
Feb  2 06:27:38 np0005604943 python3.9[53857]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:27:39 np0005604943 python3.9[54011]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:27:40 np0005604943 python3.9[54169]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:27:41 np0005604943 python3.9[54254]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:27:43 np0005604943 python3.9[54407]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:27:44 np0005604943 python3.9[54603]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:27:44 np0005604943 python3.9[54755]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:27:45 np0005604943 systemd[1]: var-lib-containers-storage-overlay-compat3579352371-merged.mount: Deactivated successfully.
Feb  2 06:27:45 np0005604943 podman[54756]: 2026-02-02 11:27:45.044354389 +0000 UTC m=+0.046011102 system refresh
Feb  2 06:27:45 np0005604943 python3.9[54919]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:27:46 np0005604943 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 06:27:46 np0005604943 python3.9[55042]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031665.2352607-74-19936728576967/.source.json follow=False _original_basename=podman_network_config.j2 checksum=c91913d9c20e086fce634157c8f885d2cad50c67 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:27:47 np0005604943 python3.9[55194]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:27:47 np0005604943 python3.9[55317]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031666.6176028-89-70711299977113/.source.conf follow=False _original_basename=registries.conf.j2 checksum=e2de8f675731c4214a7c5cc6ee9f1ecc906a06ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:27:48 np0005604943 python3.9[55469]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:27:48 np0005604943 python3.9[55621]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:27:49 np0005604943 python3.9[55773]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:27:50 np0005604943 python3.9[55925]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:27:50 np0005604943 python3.9[56077]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:27:53 np0005604943 python3.9[56230]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:27:53 np0005604943 python3.9[56384]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:27:54 np0005604943 python3.9[56536]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:27:55 np0005604943 python3.9[56688]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:27:55 np0005604943 python3.9[56841]: ansible-service_facts Invoked
Feb  2 06:27:55 np0005604943 network[56858]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 06:27:55 np0005604943 network[56859]: 'network-scripts' will be removed from distribution in near future.
Feb  2 06:27:55 np0005604943 network[56860]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 06:28:01 np0005604943 python3.9[57312]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:28:03 np0005604943 python3.9[57465]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb  2 06:28:04 np0005604943 python3.9[57617]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:05 np0005604943 python3.9[57742]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031684.1490948-233-39534242038631/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:05 np0005604943 python3.9[57896]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:06 np0005604943 python3.9[58021]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031685.366395-248-70402523831530/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:07 np0005604943 python3.9[58175]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:08 np0005604943 python3.9[58329]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:28:09 np0005604943 python3.9[58413]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:28:10 np0005604943 python3.9[58567]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:28:11 np0005604943 python3.9[58651]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:28:11 np0005604943 chronyd[801]: chronyd exiting
Feb  2 06:28:11 np0005604943 systemd[1]: Stopping NTP client/server...
Feb  2 06:28:11 np0005604943 systemd[1]: chronyd.service: Deactivated successfully.
Feb  2 06:28:11 np0005604943 systemd[1]: Stopped NTP client/server.
Feb  2 06:28:11 np0005604943 systemd[1]: Starting NTP client/server...
Feb  2 06:28:11 np0005604943 chronyd[58660]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb  2 06:28:11 np0005604943 chronyd[58660]: Frequency -26.447 +/- 0.078 ppm read from /var/lib/chrony/drift
Feb  2 06:28:11 np0005604943 chronyd[58660]: Loaded seccomp filter (level 2)
Feb  2 06:28:11 np0005604943 systemd[1]: Started NTP client/server.
Feb  2 06:28:11 np0005604943 systemd[1]: session-11.scope: Deactivated successfully.
Feb  2 06:28:11 np0005604943 systemd[1]: session-11.scope: Consumed 23.823s CPU time.
Feb  2 06:28:11 np0005604943 systemd-logind[786]: Session 11 logged out. Waiting for processes to exit.
Feb  2 06:28:11 np0005604943 systemd-logind[786]: Removed session 11.
Feb  2 06:28:17 np0005604943 systemd-logind[786]: New session 12 of user zuul.
Feb  2 06:28:18 np0005604943 systemd[1]: Started Session 12 of User zuul.
Feb  2 06:28:18 np0005604943 python3.9[58841]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:19 np0005604943 python3.9[58993]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:20 np0005604943 python3.9[59116]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031698.9060125-29-56800783560706/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:20 np0005604943 systemd[1]: session-12.scope: Deactivated successfully.
Feb  2 06:28:20 np0005604943 systemd[1]: session-12.scope: Consumed 1.601s CPU time.
Feb  2 06:28:20 np0005604943 systemd-logind[786]: Session 12 logged out. Waiting for processes to exit.
Feb  2 06:28:20 np0005604943 systemd-logind[786]: Removed session 12.
Feb  2 06:28:26 np0005604943 systemd-logind[786]: New session 13 of user zuul.
Feb  2 06:28:26 np0005604943 systemd[1]: Started Session 13 of User zuul.
Feb  2 06:28:27 np0005604943 python3.9[59294]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:28:28 np0005604943 python3.9[59450]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:29 np0005604943 python3.9[59625]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:29 np0005604943 python3.9[59748]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1770031708.5224135-36-186323428704218/.source.json _original_basename=.1nt6s9ol follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:30 np0005604943 python3.9[59900]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:31 np0005604943 python3.9[60023]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031710.0813978-59-199122392637209/.source _original_basename=.vgiasknl follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:31 np0005604943 python3.9[60175]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:28:32 np0005604943 python3.9[60327]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:32 np0005604943 python3.9[60450]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031711.8355803-83-67083530476022/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:28:33 np0005604943 python3.9[60602]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:33 np0005604943 python3.9[60725]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770031712.92361-83-50965151721502/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:28:34 np0005604943 python3.9[60877]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:34 np0005604943 python3.9[61029]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:35 np0005604943 python3.9[61152]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031714.5598109-120-151408468823622/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:36 np0005604943 python3.9[61304]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:36 np0005604943 python3.9[61427]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031715.64478-135-108550312660530/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:37 np0005604943 python3.9[61579]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:28:37 np0005604943 systemd[1]: Reloading.
Feb  2 06:28:37 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:28:37 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:28:37 np0005604943 systemd[1]: Reloading.
Feb  2 06:28:37 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:28:37 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:28:38 np0005604943 systemd[1]: Starting EDPM Container Shutdown...
Feb  2 06:28:38 np0005604943 systemd[1]: Finished EDPM Container Shutdown.
Feb  2 06:28:38 np0005604943 python3.9[61806]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:39 np0005604943 python3.9[61929]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031718.2689626-158-103198950305016/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:39 np0005604943 python3.9[62081]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:40 np0005604943 python3.9[62204]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031719.434331-173-87244689960897/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:41 np0005604943 python3.9[62356]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:28:41 np0005604943 systemd[1]: Reloading.
Feb  2 06:28:41 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:28:41 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:28:41 np0005604943 systemd[1]: Reloading.
Feb  2 06:28:41 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:28:41 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:28:41 np0005604943 systemd[1]: Starting Create netns directory...
Feb  2 06:28:41 np0005604943 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  2 06:28:41 np0005604943 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  2 06:28:41 np0005604943 systemd[1]: Finished Create netns directory.
Feb  2 06:28:42 np0005604943 python3.9[62581]: ansible-ansible.builtin.service_facts Invoked
Feb  2 06:28:42 np0005604943 network[62598]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 06:28:42 np0005604943 network[62599]: 'network-scripts' will be removed from distribution in near future.
Feb  2 06:28:42 np0005604943 network[62600]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 06:28:45 np0005604943 python3.9[62862]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:28:45 np0005604943 systemd[1]: Reloading.
Feb  2 06:28:45 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:28:45 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:28:45 np0005604943 systemd[1]: Stopping IPv4 firewall with iptables...
Feb  2 06:28:45 np0005604943 iptables.init[62902]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Feb  2 06:28:45 np0005604943 iptables.init[62902]: iptables: Flushing firewall rules: [  OK  ]
Feb  2 06:28:45 np0005604943 systemd[1]: iptables.service: Deactivated successfully.
Feb  2 06:28:45 np0005604943 systemd[1]: Stopped IPv4 firewall with iptables.
Feb  2 06:28:46 np0005604943 python3.9[63098]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:28:47 np0005604943 python3.9[63252]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:28:47 np0005604943 systemd[1]: Reloading.
Feb  2 06:28:47 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:28:47 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:28:47 np0005604943 systemd[1]: Starting Netfilter Tables...
Feb  2 06:28:47 np0005604943 systemd[1]: Finished Netfilter Tables.
Feb  2 06:28:48 np0005604943 python3.9[63444]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:28:48 np0005604943 python3.9[63597]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:49 np0005604943 python3.9[63722]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031728.4596686-242-237759205174476/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:50 np0005604943 python3.9[63875]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:28:50 np0005604943 systemd[1]: Reloading OpenSSH server daemon...
Feb  2 06:28:50 np0005604943 systemd[1]: Reloaded OpenSSH server daemon.
Feb  2 06:28:50 np0005604943 python3.9[64031]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:51 np0005604943 python3.9[64183]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:52 np0005604943 python3.9[64306]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031731.0393493-273-17219111737150/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:52 np0005604943 python3.9[64458]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb  2 06:28:52 np0005604943 systemd[1]: Starting Time & Date Service...
Feb  2 06:28:53 np0005604943 systemd[1]: Started Time & Date Service.
Feb  2 06:28:53 np0005604943 python3.9[64614]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:54 np0005604943 python3.9[64766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:54 np0005604943 python3.9[64889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031733.8986027-308-151931285435725/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:55 np0005604943 python3.9[65041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:55 np0005604943 python3.9[65164]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770031735.0254474-323-235272542059850/.source.yaml _original_basename=.ioz4y2u5 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:56 np0005604943 python3.9[65316]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:28:57 np0005604943 python3.9[65439]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031736.1230261-338-242318345441115/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:28:57 np0005604943 python3.9[65591]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:28:58 np0005604943 python3.9[65744]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:28:59 np0005604943 python3[65897]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  2 06:28:59 np0005604943 python3.9[66049]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:29:00 np0005604943 python3.9[66172]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031739.3002157-377-156554585831889/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:29:00 np0005604943 python3.9[66324]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:29:01 np0005604943 python3.9[66447]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031740.399942-392-176446160050123/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:29:02 np0005604943 python3.9[66599]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:29:02 np0005604943 python3.9[66722]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031741.627886-407-202352422395024/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:29:03 np0005604943 python3.9[66874]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:29:03 np0005604943 python3.9[66997]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031742.7976298-422-195461157645469/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:29:04 np0005604943 python3.9[67149]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:29:05 np0005604943 python3.9[67272]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770031743.9396253-437-66515839129687/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:29:05 np0005604943 python3.9[67424]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:29:06 np0005604943 python3.9[67576]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:29:07 np0005604943 python3.9[67735]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:29:07 np0005604943 python3.9[67888]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:29:08 np0005604943 python3.9[68040]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:29:09 np0005604943 python3.9[68192]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb  2 06:29:10 np0005604943 python3.9[68345]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb  2 06:29:10 np0005604943 systemd[1]: session-13.scope: Deactivated successfully.
Feb  2 06:29:10 np0005604943 systemd[1]: session-13.scope: Consumed 33.311s CPU time.
Feb  2 06:29:10 np0005604943 systemd-logind[786]: Session 13 logged out. Waiting for processes to exit.
Feb  2 06:29:10 np0005604943 systemd-logind[786]: Removed session 13.
Feb  2 06:29:15 np0005604943 systemd-logind[786]: New session 14 of user zuul.
Feb  2 06:29:15 np0005604943 systemd[1]: Started Session 14 of User zuul.
Feb  2 06:29:16 np0005604943 python3.9[68526]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb  2 06:29:17 np0005604943 python3.9[68678]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:29:18 np0005604943 python3.9[68830]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:29:19 np0005604943 python3.9[68982]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfaF4IdqXfMSGs4GhWaZFtA4Qu8RGgt5AsxWEnaCvTDvzl7EYu73JYQ0NnpnA1dZyRSPtWK5yZNFuCp9mcYerHE/VOIeY84yboreiq6oJoObwGlSPmdrEZPMShaSMrVfhkVseqLc4y1S5kU2UZqM3+OkpCVVeajnfHqZi6qH1fdoEe+mLgxKgX/vu7GRACcrVTzSnOnvLGcbiUy+FF++Euyk9D7DfF6caFdd4zoDFTd4CWiGjsQ2yWxZ0L5PSc3ObyJt3lxxxujtpacNakug142pr0O5PWMASfhm5nw72W45Ejp6uoLsWLLa4YZT4UYD8/bhgf2KMFQAUCXMoU2/+zauSg+IzqW+JPFQWYFEsBiDqg/jqu3VV7PSDoh/PviiShJ0gtqZZiQWgw1MGv8Txh7WlfI/QobTyFkazk7TEYnUt3K0CZgDtFIPpsKf+XHDK/YZb2SGzh1G6BsnW3ty8rGUnugFyTcT+HdXU9zNqgUgNpsGUjuOHSCnjwTV4V1GM=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBgSlBSlJEtpKhJj2H7DuKmtCggvKxC6o/EZ8HL54jj6#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIXf1GmBQzALPmDKfSXAuVB+xRYqjD32Y59Ej3vCpmUbvcj30rVtWJi5Szv+7TjUVBrZXLEVEpadyLc+MRDWQ8Y=#012 create=True mode=0644 path=/tmp/ansible.9i5xkgnx state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:29:19 np0005604943 python3.9[69134]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.9i5xkgnx' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:29:20 np0005604943 python3.9[69288]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.9i5xkgnx state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:29:21 np0005604943 systemd[1]: session-14.scope: Deactivated successfully.
Feb  2 06:29:21 np0005604943 systemd[1]: session-14.scope: Consumed 3.256s CPU time.
Feb  2 06:29:21 np0005604943 systemd-logind[786]: Session 14 logged out. Waiting for processes to exit.
Feb  2 06:29:21 np0005604943 systemd-logind[786]: Removed session 14.
Feb  2 06:29:23 np0005604943 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb  2 06:29:26 np0005604943 systemd-logind[786]: New session 15 of user zuul.
Feb  2 06:29:26 np0005604943 systemd[1]: Started Session 15 of User zuul.
Feb  2 06:29:27 np0005604943 python3.9[69468]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:29:28 np0005604943 python3.9[69624]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb  2 06:29:29 np0005604943 python3.9[69778]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:29:30 np0005604943 python3.9[69931]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:29:31 np0005604943 python3.9[70084]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:29:31 np0005604943 python3.9[70238]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:29:32 np0005604943 python3.9[70393]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:29:33 np0005604943 systemd-logind[786]: Session 15 logged out. Waiting for processes to exit.
Feb  2 06:29:33 np0005604943 systemd[1]: session-15.scope: Deactivated successfully.
Feb  2 06:29:33 np0005604943 systemd[1]: session-15.scope: Consumed 4.204s CPU time.
Feb  2 06:29:33 np0005604943 systemd-logind[786]: Removed session 15.
Feb  2 06:29:37 np0005604943 systemd-logind[786]: New session 16 of user zuul.
Feb  2 06:29:37 np0005604943 systemd[1]: Started Session 16 of User zuul.
Feb  2 06:29:38 np0005604943 python3.9[70571]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:29:39 np0005604943 python3.9[70727]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:29:40 np0005604943 python3.9[70811]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  2 06:29:42 np0005604943 python3.9[70962]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:29:43 np0005604943 python3.9[71113]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 06:29:44 np0005604943 python3.9[71263]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:29:44 np0005604943 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 06:29:44 np0005604943 python3.9[71414]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:29:45 np0005604943 systemd-logind[786]: Session 16 logged out. Waiting for processes to exit.
Feb  2 06:29:45 np0005604943 systemd[1]: session-16.scope: Deactivated successfully.
Feb  2 06:29:45 np0005604943 systemd[1]: session-16.scope: Consumed 5.465s CPU time.
Feb  2 06:29:45 np0005604943 systemd-logind[786]: Removed session 16.
Feb  2 06:29:53 np0005604943 systemd-logind[786]: New session 17 of user zuul.
Feb  2 06:29:53 np0005604943 systemd[1]: Started Session 17 of User zuul.
Feb  2 06:29:58 np0005604943 python3[72180]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:30:00 np0005604943 python3[72275]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 06:30:01 np0005604943 python3[72302]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 06:30:01 np0005604943 python3[72328]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:30:01 np0005604943 kernel: loop: module loaded
Feb  2 06:30:01 np0005604943 kernel: loop3: detected capacity change from 0 to 41943040
Feb  2 06:30:02 np0005604943 python3[72363]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:30:02 np0005604943 lvm[72366]: PV /dev/loop3 not used.
Feb  2 06:30:02 np0005604943 lvm[72375]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:30:02 np0005604943 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Feb  2 06:30:02 np0005604943 lvm[72377]:  1 logical volume(s) in volume group "ceph_vg0" now active
Feb  2 06:30:02 np0005604943 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Feb  2 06:30:02 np0005604943 python3[72455]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:30:03 np0005604943 python3[72528]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770031802.6319113-36286-210953956224704/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:30:03 np0005604943 python3[72578]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:30:04 np0005604943 systemd[1]: Reloading.
Feb  2 06:30:04 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:30:04 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:30:04 np0005604943 systemd[1]: Starting Ceph OSD losetup...
Feb  2 06:30:04 np0005604943 bash[72617]: /dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)
Feb  2 06:30:04 np0005604943 systemd[1]: Finished Ceph OSD losetup.
Feb  2 06:30:04 np0005604943 lvm[72618]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:30:04 np0005604943 lvm[72618]: VG ceph_vg0 finished
Feb  2 06:30:04 np0005604943 python3[72644]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 06:30:06 np0005604943 python3[72671]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 06:30:06 np0005604943 python3[72697]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:30:06 np0005604943 kernel: loop4: detected capacity change from 0 to 41943040
Feb  2 06:30:06 np0005604943 python3[72729]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:30:06 np0005604943 lvm[72732]: PV /dev/loop4 not used.
Feb  2 06:30:06 np0005604943 lvm[72734]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:30:06 np0005604943 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Feb  2 06:30:06 np0005604943 lvm[72737]:  1 logical volume(s) in volume group "ceph_vg1" now active
Feb  2 06:30:06 np0005604943 lvm[72745]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:30:06 np0005604943 lvm[72745]: VG ceph_vg1 finished
Feb  2 06:30:06 np0005604943 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Feb  2 06:30:07 np0005604943 python3[72823]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:30:07 np0005604943 python3[72896]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770031807.0492792-36313-114706380983599/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:30:08 np0005604943 python3[72946]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:30:08 np0005604943 systemd[1]: Reloading.
Feb  2 06:30:08 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:30:08 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:30:08 np0005604943 systemd[1]: Starting Ceph OSD losetup...
Feb  2 06:30:08 np0005604943 bash[72986]: /dev/loop4: [64513]:4355818 (/var/lib/ceph-osd-1.img)
Feb  2 06:30:08 np0005604943 systemd[1]: Finished Ceph OSD losetup.
Feb  2 06:30:08 np0005604943 lvm[72987]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:30:08 np0005604943 lvm[72987]: VG ceph_vg1 finished
Feb  2 06:30:08 np0005604943 python3[73013]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 06:30:10 np0005604943 python3[73040]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 06:30:10 np0005604943 python3[73066]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:30:10 np0005604943 kernel: loop5: detected capacity change from 0 to 41943040
Feb  2 06:30:11 np0005604943 python3[73098]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:30:11 np0005604943 lvm[73101]: PV /dev/loop5 not used.
Feb  2 06:30:11 np0005604943 lvm[73103]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:30:11 np0005604943 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Feb  2 06:30:11 np0005604943 lvm[73106]:  1 logical volume(s) in volume group "ceph_vg2" now active
Feb  2 06:30:11 np0005604943 lvm[73114]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:30:11 np0005604943 lvm[73114]: VG ceph_vg2 finished
Feb  2 06:30:11 np0005604943 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Feb  2 06:30:11 np0005604943 python3[73192]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:30:12 np0005604943 python3[73265]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770031811.5309663-36340-228288665235441/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:30:12 np0005604943 python3[73315]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:30:12 np0005604943 systemd[1]: Reloading.
Feb  2 06:30:12 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:30:12 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:30:12 np0005604943 systemd[1]: Starting Ceph OSD losetup...
Feb  2 06:30:12 np0005604943 bash[73355]: /dev/loop5: [64513]:4355819 (/var/lib/ceph-osd-2.img)
Feb  2 06:30:12 np0005604943 systemd[1]: Finished Ceph OSD losetup.
Feb  2 06:30:12 np0005604943 lvm[73356]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:30:12 np0005604943 lvm[73356]: VG ceph_vg2 finished
Feb  2 06:30:14 np0005604943 python3[73380]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:30:16 np0005604943 python3[73473]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 06:30:19 np0005604943 python3[73530]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 06:30:21 np0005604943 chronyd[58660]: Selected source 149.56.19.163 (pool.ntp.org)
Feb  2 06:30:21 np0005604943 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 06:30:21 np0005604943 systemd[1]: Starting man-db-cache-update.service...
Feb  2 06:30:22 np0005604943 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 06:30:22 np0005604943 systemd[1]: Finished man-db-cache-update.service.
Feb  2 06:30:22 np0005604943 systemd[1]: run-rb4b8ab797acf48fd870e0a6d85816f67.service: Deactivated successfully.
Feb  2 06:30:22 np0005604943 python3[73649]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 06:30:22 np0005604943 python3[73677]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:30:23 np0005604943 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 06:30:23 np0005604943 python3[73716]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:30:24 np0005604943 python3[73742]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:30:24 np0005604943 python3[73820]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:30:25 np0005604943 python3[73893]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770031824.5596032-36488-192707615999175/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:30:25 np0005604943 python3[73995]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:30:26 np0005604943 python3[74068]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770031825.660926-36506-211599076721137/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:30:26 np0005604943 python3[74118]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 06:30:26 np0005604943 python3[74146]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 06:30:27 np0005604943 python3[74174]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 06:30:27 np0005604943 python3[74200]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 06:30:28 np0005604943 python3[74226]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:30:28 np0005604943 systemd-logind[786]: New session 18 of user ceph-admin.
Feb  2 06:30:28 np0005604943 systemd[1]: Created slice User Slice of UID 42477.
Feb  2 06:30:28 np0005604943 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb  2 06:30:28 np0005604943 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb  2 06:30:28 np0005604943 systemd[1]: Starting User Manager for UID 42477...
Feb  2 06:30:28 np0005604943 systemd[74234]: Queued start job for default target Main User Target.
Feb  2 06:30:28 np0005604943 systemd[74234]: Created slice User Application Slice.
Feb  2 06:30:28 np0005604943 systemd[74234]: Started Mark boot as successful after the user session has run 2 minutes.
Feb  2 06:30:28 np0005604943 systemd[74234]: Started Daily Cleanup of User's Temporary Directories.
Feb  2 06:30:28 np0005604943 systemd[74234]: Reached target Paths.
Feb  2 06:30:28 np0005604943 systemd[74234]: Reached target Timers.
Feb  2 06:30:28 np0005604943 systemd[74234]: Starting D-Bus User Message Bus Socket...
Feb  2 06:30:28 np0005604943 systemd[74234]: Starting Create User's Volatile Files and Directories...
Feb  2 06:30:28 np0005604943 systemd[74234]: Finished Create User's Volatile Files and Directories.
Feb  2 06:30:28 np0005604943 systemd[74234]: Listening on D-Bus User Message Bus Socket.
Feb  2 06:30:28 np0005604943 systemd[74234]: Reached target Sockets.
Feb  2 06:30:28 np0005604943 systemd[74234]: Reached target Basic System.
Feb  2 06:30:28 np0005604943 systemd[74234]: Reached target Main User Target.
Feb  2 06:30:28 np0005604943 systemd[74234]: Startup finished in 122ms.
Feb  2 06:30:28 np0005604943 systemd[1]: Started User Manager for UID 42477.
Feb  2 06:30:28 np0005604943 systemd[1]: Started Session 18 of User ceph-admin.
Feb  2 06:30:28 np0005604943 systemd[1]: session-18.scope: Deactivated successfully.
Feb  2 06:30:28 np0005604943 systemd-logind[786]: Session 18 logged out. Waiting for processes to exit.
Feb  2 06:30:28 np0005604943 systemd-logind[786]: Removed session 18.
Feb  2 06:30:28 np0005604943 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 06:30:28 np0005604943 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 06:30:30 np0005604943 systemd[1]: var-lib-containers-storage-overlay-compat3193276069-merged.mount: Deactivated successfully.
Feb  2 06:30:30 np0005604943 systemd[1]: var-lib-containers-storage-overlay-compat3193276069-lower\x2dmapped.mount: Deactivated successfully.
Feb  2 06:30:38 np0005604943 systemd[1]: Stopping User Manager for UID 42477...
Feb  2 06:30:38 np0005604943 systemd[74234]: Activating special unit Exit the Session...
Feb  2 06:30:38 np0005604943 systemd[74234]: Stopped target Main User Target.
Feb  2 06:30:38 np0005604943 systemd[74234]: Stopped target Basic System.
Feb  2 06:30:38 np0005604943 systemd[74234]: Stopped target Paths.
Feb  2 06:30:38 np0005604943 systemd[74234]: Stopped target Sockets.
Feb  2 06:30:38 np0005604943 systemd[74234]: Stopped target Timers.
Feb  2 06:30:38 np0005604943 systemd[74234]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb  2 06:30:38 np0005604943 systemd[74234]: Stopped Daily Cleanup of User's Temporary Directories.
Feb  2 06:30:38 np0005604943 systemd[74234]: Closed D-Bus User Message Bus Socket.
Feb  2 06:30:38 np0005604943 systemd[74234]: Stopped Create User's Volatile Files and Directories.
Feb  2 06:30:38 np0005604943 systemd[74234]: Removed slice User Application Slice.
Feb  2 06:30:38 np0005604943 systemd[74234]: Reached target Shutdown.
Feb  2 06:30:38 np0005604943 systemd[74234]: Finished Exit the Session.
Feb  2 06:30:38 np0005604943 systemd[74234]: Reached target Exit the Session.
Feb  2 06:30:38 np0005604943 systemd[1]: user@42477.service: Deactivated successfully.
Feb  2 06:30:38 np0005604943 systemd[1]: Stopped User Manager for UID 42477.
Feb  2 06:30:38 np0005604943 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Feb  2 06:30:38 np0005604943 systemd[1]: run-user-42477.mount: Deactivated successfully.
Feb  2 06:30:38 np0005604943 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Feb  2 06:30:38 np0005604943 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Feb  2 06:30:39 np0005604943 systemd[1]: Removed slice User Slice of UID 42477.
Feb  2 06:30:45 np0005604943 podman[74329]: 2026-02-02 11:30:45.508940703 +0000 UTC m=+16.444037916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:45 np0005604943 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 06:30:45 np0005604943 podman[74389]: 2026-02-02 11:30:45.584503216 +0000 UTC m=+0.052776074 container create 84e4727be8bfbafead3798b03a25264850a44fb9093e3e2783d0a11de59a84ea (image=quay.io/ceph/ceph:v20, name=blissful_chatelet, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 06:30:45 np0005604943 systemd[1]: Created slice Virtual Machine and Container Slice.
Feb  2 06:30:45 np0005604943 systemd[1]: Started libpod-conmon-84e4727be8bfbafead3798b03a25264850a44fb9093e3e2783d0a11de59a84ea.scope.
Feb  2 06:30:45 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:45 np0005604943 podman[74389]: 2026-02-02 11:30:45.562621902 +0000 UTC m=+0.030894840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:45 np0005604943 podman[74389]: 2026-02-02 11:30:45.703554339 +0000 UTC m=+0.171827287 container init 84e4727be8bfbafead3798b03a25264850a44fb9093e3e2783d0a11de59a84ea (image=quay.io/ceph/ceph:v20, name=blissful_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:30:45 np0005604943 podman[74389]: 2026-02-02 11:30:45.712884833 +0000 UTC m=+0.181157731 container start 84e4727be8bfbafead3798b03a25264850a44fb9093e3e2783d0a11de59a84ea (image=quay.io/ceph/ceph:v20, name=blissful_chatelet, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:30:45 np0005604943 podman[74389]: 2026-02-02 11:30:45.717329184 +0000 UTC m=+0.185602132 container attach 84e4727be8bfbafead3798b03a25264850a44fb9093e3e2783d0a11de59a84ea (image=quay.io/ceph/ceph:v20, name=blissful_chatelet, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:30:45 np0005604943 blissful_chatelet[74405]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb  2 06:30:45 np0005604943 systemd[1]: libpod-84e4727be8bfbafead3798b03a25264850a44fb9093e3e2783d0a11de59a84ea.scope: Deactivated successfully.
Feb  2 06:30:45 np0005604943 podman[74389]: 2026-02-02 11:30:45.823292782 +0000 UTC m=+0.291565670 container died 84e4727be8bfbafead3798b03a25264850a44fb9093e3e2783d0a11de59a84ea (image=quay.io/ceph/ceph:v20, name=blissful_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:30:45 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ad07ddda16406e2420b473ff99fe7c9526bb2f17c448ab8d31701a058f4220f5-merged.mount: Deactivated successfully.
Feb  2 06:30:45 np0005604943 podman[74389]: 2026-02-02 11:30:45.868379286 +0000 UTC m=+0.336652174 container remove 84e4727be8bfbafead3798b03a25264850a44fb9093e3e2783d0a11de59a84ea (image=quay.io/ceph/ceph:v20, name=blissful_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:30:45 np0005604943 systemd[1]: libpod-conmon-84e4727be8bfbafead3798b03a25264850a44fb9093e3e2783d0a11de59a84ea.scope: Deactivated successfully.
Feb  2 06:30:45 np0005604943 podman[74424]: 2026-02-02 11:30:45.9303885 +0000 UTC m=+0.044191810 container create 4f79edd491aab81f5f5488183f6a141e3bf4bb5268877f9d1d82d2d009b7b47b (image=quay.io/ceph/ceph:v20, name=exciting_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:30:45 np0005604943 systemd[1]: Started libpod-conmon-4f79edd491aab81f5f5488183f6a141e3bf4bb5268877f9d1d82d2d009b7b47b.scope.
Feb  2 06:30:45 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:45 np0005604943 podman[74424]: 2026-02-02 11:30:45.988683444 +0000 UTC m=+0.102486824 container init 4f79edd491aab81f5f5488183f6a141e3bf4bb5268877f9d1d82d2d009b7b47b (image=quay.io/ceph/ceph:v20, name=exciting_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:30:45 np0005604943 podman[74424]: 2026-02-02 11:30:45.996571579 +0000 UTC m=+0.110374919 container start 4f79edd491aab81f5f5488183f6a141e3bf4bb5268877f9d1d82d2d009b7b47b (image=quay.io/ceph/ceph:v20, name=exciting_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:30:46 np0005604943 podman[74424]: 2026-02-02 11:30:46.000798753 +0000 UTC m=+0.114602143 container attach 4f79edd491aab81f5f5488183f6a141e3bf4bb5268877f9d1d82d2d009b7b47b (image=quay.io/ceph/ceph:v20, name=exciting_kapitsa, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:30:46 np0005604943 exciting_kapitsa[74440]: 167 167
Feb  2 06:30:46 np0005604943 systemd[1]: libpod-4f79edd491aab81f5f5488183f6a141e3bf4bb5268877f9d1d82d2d009b7b47b.scope: Deactivated successfully.
Feb  2 06:30:46 np0005604943 podman[74424]: 2026-02-02 11:30:46.002193941 +0000 UTC m=+0.115997271 container died 4f79edd491aab81f5f5488183f6a141e3bf4bb5268877f9d1d82d2d009b7b47b (image=quay.io/ceph/ceph:v20, name=exciting_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 06:30:46 np0005604943 podman[74424]: 2026-02-02 11:30:45.91267402 +0000 UTC m=+0.026477370 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:46 np0005604943 podman[74424]: 2026-02-02 11:30:46.044557982 +0000 UTC m=+0.158361292 container remove 4f79edd491aab81f5f5488183f6a141e3bf4bb5268877f9d1d82d2d009b7b47b (image=quay.io/ceph/ceph:v20, name=exciting_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:30:46 np0005604943 systemd[1]: libpod-conmon-4f79edd491aab81f5f5488183f6a141e3bf4bb5268877f9d1d82d2d009b7b47b.scope: Deactivated successfully.
Feb  2 06:30:46 np0005604943 podman[74458]: 2026-02-02 11:30:46.114499191 +0000 UTC m=+0.050797631 container create fa71e393d669107ec0f4ae0592669bf047e0da6a5055358f027297a4d28682c3 (image=quay.io/ceph/ceph:v20, name=lucid_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 06:30:46 np0005604943 systemd[1]: Started libpod-conmon-fa71e393d669107ec0f4ae0592669bf047e0da6a5055358f027297a4d28682c3.scope.
Feb  2 06:30:46 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:46 np0005604943 podman[74458]: 2026-02-02 11:30:46.175825197 +0000 UTC m=+0.112123657 container init fa71e393d669107ec0f4ae0592669bf047e0da6a5055358f027297a4d28682c3 (image=quay.io/ceph/ceph:v20, name=lucid_mcnulty, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:30:46 np0005604943 podman[74458]: 2026-02-02 11:30:46.086592003 +0000 UTC m=+0.022890503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:46 np0005604943 podman[74458]: 2026-02-02 11:30:46.182349574 +0000 UTC m=+0.118648014 container start fa71e393d669107ec0f4ae0592669bf047e0da6a5055358f027297a4d28682c3 (image=quay.io/ceph/ceph:v20, name=lucid_mcnulty, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 06:30:46 np0005604943 podman[74458]: 2026-02-02 11:30:46.186187228 +0000 UTC m=+0.122485638 container attach fa71e393d669107ec0f4ae0592669bf047e0da6a5055358f027297a4d28682c3 (image=quay.io/ceph/ceph:v20, name=lucid_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:30:46 np0005604943 lucid_mcnulty[74474]: AQDmioBpgDfHDBAA4mW22n9rvNztTUbLqHSYPA==
Feb  2 06:30:46 np0005604943 systemd[1]: libpod-fa71e393d669107ec0f4ae0592669bf047e0da6a5055358f027297a4d28682c3.scope: Deactivated successfully.
Feb  2 06:30:46 np0005604943 podman[74458]: 2026-02-02 11:30:46.219274477 +0000 UTC m=+0.155572887 container died fa71e393d669107ec0f4ae0592669bf047e0da6a5055358f027297a4d28682c3 (image=quay.io/ceph/ceph:v20, name=lucid_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:30:46 np0005604943 podman[74458]: 2026-02-02 11:30:46.259389536 +0000 UTC m=+0.195687956 container remove fa71e393d669107ec0f4ae0592669bf047e0da6a5055358f027297a4d28682c3 (image=quay.io/ceph/ceph:v20, name=lucid_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:30:46 np0005604943 systemd[1]: libpod-conmon-fa71e393d669107ec0f4ae0592669bf047e0da6a5055358f027297a4d28682c3.scope: Deactivated successfully.
Feb  2 06:30:46 np0005604943 podman[74494]: 2026-02-02 11:30:46.327037574 +0000 UTC m=+0.049139896 container create 2c969a763fe607d4e3e5ca91baa3316fa77d7a64461018cb517be7bb1a29070d (image=quay.io/ceph/ceph:v20, name=wizardly_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 06:30:46 np0005604943 systemd[1]: Started libpod-conmon-2c969a763fe607d4e3e5ca91baa3316fa77d7a64461018cb517be7bb1a29070d.scope.
Feb  2 06:30:46 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:46 np0005604943 podman[74494]: 2026-02-02 11:30:46.383999241 +0000 UTC m=+0.106101613 container init 2c969a763fe607d4e3e5ca91baa3316fa77d7a64461018cb517be7bb1a29070d (image=quay.io/ceph/ceph:v20, name=wizardly_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:30:46 np0005604943 podman[74494]: 2026-02-02 11:30:46.390531288 +0000 UTC m=+0.112633590 container start 2c969a763fe607d4e3e5ca91baa3316fa77d7a64461018cb517be7bb1a29070d (image=quay.io/ceph/ceph:v20, name=wizardly_pike, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 06:30:46 np0005604943 podman[74494]: 2026-02-02 11:30:46.393796427 +0000 UTC m=+0.115898809 container attach 2c969a763fe607d4e3e5ca91baa3316fa77d7a64461018cb517be7bb1a29070d (image=quay.io/ceph/ceph:v20, name=wizardly_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 06:30:46 np0005604943 podman[74494]: 2026-02-02 11:30:46.303922656 +0000 UTC m=+0.026025028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:46 np0005604943 wizardly_pike[74511]: AQDmioBpesgWGRAAMl6Ryecrn+VSd9o+i59Niw==
Feb  2 06:30:46 np0005604943 systemd[1]: libpod-2c969a763fe607d4e3e5ca91baa3316fa77d7a64461018cb517be7bb1a29070d.scope: Deactivated successfully.
Feb  2 06:30:46 np0005604943 podman[74494]: 2026-02-02 11:30:46.424651745 +0000 UTC m=+0.146754107 container died 2c969a763fe607d4e3e5ca91baa3316fa77d7a64461018cb517be7bb1a29070d (image=quay.io/ceph/ceph:v20, name=wizardly_pike, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 06:30:46 np0005604943 podman[74494]: 2026-02-02 11:30:46.465539785 +0000 UTC m=+0.187642077 container remove 2c969a763fe607d4e3e5ca91baa3316fa77d7a64461018cb517be7bb1a29070d (image=quay.io/ceph/ceph:v20, name=wizardly_pike, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  2 06:30:46 np0005604943 systemd[1]: libpod-conmon-2c969a763fe607d4e3e5ca91baa3316fa77d7a64461018cb517be7bb1a29070d.scope: Deactivated successfully.
Feb  2 06:30:46 np0005604943 podman[74529]: 2026-02-02 11:30:46.527606502 +0000 UTC m=+0.046958317 container create c3972784ae8b7f1d0b5aa1c06cef3c631e8d28e9b0bda53933de929e3c15be96 (image=quay.io/ceph/ceph:v20, name=nostalgic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:30:46 np0005604943 systemd[1]: Started libpod-conmon-c3972784ae8b7f1d0b5aa1c06cef3c631e8d28e9b0bda53933de929e3c15be96.scope.
Feb  2 06:30:46 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:46 np0005604943 podman[74529]: 2026-02-02 11:30:46.504807983 +0000 UTC m=+0.024159858 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:47 np0005604943 podman[74529]: 2026-02-02 11:30:47.330385445 +0000 UTC m=+0.849737260 container init c3972784ae8b7f1d0b5aa1c06cef3c631e8d28e9b0bda53933de929e3c15be96 (image=quay.io/ceph/ceph:v20, name=nostalgic_mclaren, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:30:47 np0005604943 podman[74529]: 2026-02-02 11:30:47.336701247 +0000 UTC m=+0.856053032 container start c3972784ae8b7f1d0b5aa1c06cef3c631e8d28e9b0bda53933de929e3c15be96 (image=quay.io/ceph/ceph:v20, name=nostalgic_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:30:47 np0005604943 nostalgic_mclaren[74545]: AQDnioBpPI3hFRAA/AYlDZI3hfoggtNKQ3q2/g==
Feb  2 06:30:47 np0005604943 systemd[1]: libpod-c3972784ae8b7f1d0b5aa1c06cef3c631e8d28e9b0bda53933de929e3c15be96.scope: Deactivated successfully.
Feb  2 06:30:47 np0005604943 podman[74529]: 2026-02-02 11:30:47.377399482 +0000 UTC m=+0.896751347 container attach c3972784ae8b7f1d0b5aa1c06cef3c631e8d28e9b0bda53933de929e3c15be96 (image=quay.io/ceph/ceph:v20, name=nostalgic_mclaren, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:30:47 np0005604943 podman[74529]: 2026-02-02 11:30:47.377992138 +0000 UTC m=+0.897343943 container died c3972784ae8b7f1d0b5aa1c06cef3c631e8d28e9b0bda53933de929e3c15be96 (image=quay.io/ceph/ceph:v20, name=nostalgic_mclaren, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:30:47 np0005604943 systemd[1]: var-lib-containers-storage-overlay-04041db20a8c3ec5a104aa3801b47dfb1cf56ea63ad1f97b742662b00ba895ba-merged.mount: Deactivated successfully.
Feb  2 06:30:47 np0005604943 podman[74529]: 2026-02-02 11:30:47.426134266 +0000 UTC m=+0.945486081 container remove c3972784ae8b7f1d0b5aa1c06cef3c631e8d28e9b0bda53933de929e3c15be96 (image=quay.io/ceph/ceph:v20, name=nostalgic_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 06:30:47 np0005604943 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 06:30:47 np0005604943 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 06:30:47 np0005604943 systemd[1]: libpod-conmon-c3972784ae8b7f1d0b5aa1c06cef3c631e8d28e9b0bda53933de929e3c15be96.scope: Deactivated successfully.
Feb  2 06:30:47 np0005604943 podman[74563]: 2026-02-02 11:30:47.512758698 +0000 UTC m=+0.060630998 container create 8a059828f1ce5d219d4401249f968f3bcb613c16a952bcf585a524119e9e7a4c (image=quay.io/ceph/ceph:v20, name=relaxed_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 06:30:47 np0005604943 systemd[1]: Started libpod-conmon-8a059828f1ce5d219d4401249f968f3bcb613c16a952bcf585a524119e9e7a4c.scope.
Feb  2 06:30:47 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4f8da477a39b0702578074c03774db8f5563859df17e1f27e18f017540fc3f/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:47 np0005604943 podman[74563]: 2026-02-02 11:30:47.57653314 +0000 UTC m=+0.124405450 container init 8a059828f1ce5d219d4401249f968f3bcb613c16a952bcf585a524119e9e7a4c (image=quay.io/ceph/ceph:v20, name=relaxed_cartwright, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 06:30:47 np0005604943 podman[74563]: 2026-02-02 11:30:47.486345911 +0000 UTC m=+0.034218271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:47 np0005604943 podman[74563]: 2026-02-02 11:30:47.582493462 +0000 UTC m=+0.130365772 container start 8a059828f1ce5d219d4401249f968f3bcb613c16a952bcf585a524119e9e7a4c (image=quay.io/ceph/ceph:v20, name=relaxed_cartwright, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:30:47 np0005604943 podman[74563]: 2026-02-02 11:30:47.58647282 +0000 UTC m=+0.134345140 container attach 8a059828f1ce5d219d4401249f968f3bcb613c16a952bcf585a524119e9e7a4c (image=quay.io/ceph/ceph:v20, name=relaxed_cartwright, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:30:47 np0005604943 relaxed_cartwright[74579]: /usr/bin/monmaptool: monmap file /tmp/monmap
Feb  2 06:30:47 np0005604943 relaxed_cartwright[74579]: setting min_mon_release = tentacle
Feb  2 06:30:47 np0005604943 relaxed_cartwright[74579]: /usr/bin/monmaptool: set fsid to 4548a36b-7cdc-5e3e-a814-4e1571be1fae
Feb  2 06:30:47 np0005604943 relaxed_cartwright[74579]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Feb  2 06:30:47 np0005604943 systemd[1]: libpod-8a059828f1ce5d219d4401249f968f3bcb613c16a952bcf585a524119e9e7a4c.scope: Deactivated successfully.
Feb  2 06:30:47 np0005604943 podman[74563]: 2026-02-02 11:30:47.632814789 +0000 UTC m=+0.180687099 container died 8a059828f1ce5d219d4401249f968f3bcb613c16a952bcf585a524119e9e7a4c (image=quay.io/ceph/ceph:v20, name=relaxed_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:30:47 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4c4f8da477a39b0702578074c03774db8f5563859df17e1f27e18f017540fc3f-merged.mount: Deactivated successfully.
Feb  2 06:30:47 np0005604943 podman[74563]: 2026-02-02 11:30:47.676336561 +0000 UTC m=+0.224208871 container remove 8a059828f1ce5d219d4401249f968f3bcb613c16a952bcf585a524119e9e7a4c (image=quay.io/ceph/ceph:v20, name=relaxed_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 06:30:47 np0005604943 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 06:30:47 np0005604943 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 06:30:47 np0005604943 systemd[1]: libpod-conmon-8a059828f1ce5d219d4401249f968f3bcb613c16a952bcf585a524119e9e7a4c.scope: Deactivated successfully.
Feb  2 06:30:47 np0005604943 podman[74599]: 2026-02-02 11:30:47.756179459 +0000 UTC m=+0.058395776 container create 76c70d7d1c6586256b33ad0848ebd427b77a0c0560e8b1e15bbfb2908dbf7a47 (image=quay.io/ceph/ceph:v20, name=gallant_kirch, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 06:30:47 np0005604943 systemd[1]: Started libpod-conmon-76c70d7d1c6586256b33ad0848ebd427b77a0c0560e8b1e15bbfb2908dbf7a47.scope.
Feb  2 06:30:47 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6637c02366ce1dda6d1cbff06b2f67bdfdbc828d4a730da931efc9c1c6b1239a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6637c02366ce1dda6d1cbff06b2f67bdfdbc828d4a730da931efc9c1c6b1239a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6637c02366ce1dda6d1cbff06b2f67bdfdbc828d4a730da931efc9c1c6b1239a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6637c02366ce1dda6d1cbff06b2f67bdfdbc828d4a730da931efc9c1c6b1239a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:47 np0005604943 podman[74599]: 2026-02-02 11:30:47.730848581 +0000 UTC m=+0.033064958 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:47 np0005604943 podman[74599]: 2026-02-02 11:30:47.838815794 +0000 UTC m=+0.141032171 container init 76c70d7d1c6586256b33ad0848ebd427b77a0c0560e8b1e15bbfb2908dbf7a47 (image=quay.io/ceph/ceph:v20, name=gallant_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 06:30:47 np0005604943 podman[74599]: 2026-02-02 11:30:47.854110229 +0000 UTC m=+0.156326516 container start 76c70d7d1c6586256b33ad0848ebd427b77a0c0560e8b1e15bbfb2908dbf7a47 (image=quay.io/ceph/ceph:v20, name=gallant_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:30:47 np0005604943 podman[74599]: 2026-02-02 11:30:47.858428906 +0000 UTC m=+0.160645283 container attach 76c70d7d1c6586256b33ad0848ebd427b77a0c0560e8b1e15bbfb2908dbf7a47 (image=quay.io/ceph/ceph:v20, name=gallant_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 06:30:47 np0005604943 systemd[1]: libpod-76c70d7d1c6586256b33ad0848ebd427b77a0c0560e8b1e15bbfb2908dbf7a47.scope: Deactivated successfully.
Feb  2 06:30:47 np0005604943 podman[74599]: 2026-02-02 11:30:47.962171584 +0000 UTC m=+0.264387911 container died 76c70d7d1c6586256b33ad0848ebd427b77a0c0560e8b1e15bbfb2908dbf7a47 (image=quay.io/ceph/ceph:v20, name=gallant_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:30:48 np0005604943 podman[74599]: 2026-02-02 11:30:48.003381424 +0000 UTC m=+0.305597741 container remove 76c70d7d1c6586256b33ad0848ebd427b77a0c0560e8b1e15bbfb2908dbf7a47 (image=quay.io/ceph/ceph:v20, name=gallant_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:30:48 np0005604943 systemd[1]: libpod-conmon-76c70d7d1c6586256b33ad0848ebd427b77a0c0560e8b1e15bbfb2908dbf7a47.scope: Deactivated successfully.
Feb  2 06:30:48 np0005604943 systemd[1]: Reloading.
Feb  2 06:30:48 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:30:48 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:30:48 np0005604943 systemd[1]: Reloading.
Feb  2 06:30:48 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:30:48 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:30:48 np0005604943 systemd[1]: Reached target All Ceph clusters and services.
Feb  2 06:30:48 np0005604943 systemd[1]: Reloading.
Feb  2 06:30:48 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:30:48 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:30:48 np0005604943 systemd[1]: Reached target Ceph cluster 4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:30:48 np0005604943 systemd[1]: Reloading.
Feb  2 06:30:48 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:30:48 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:30:48 np0005604943 systemd[1]: Reloading.
Feb  2 06:30:49 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:30:49 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:30:49 np0005604943 systemd[1]: Created slice Slice /system/ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:30:49 np0005604943 systemd[1]: Reached target System Time Set.
Feb  2 06:30:49 np0005604943 systemd[1]: Reached target System Time Synchronized.
Feb  2 06:30:49 np0005604943 systemd[1]: Starting Ceph mon.compute-0 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae...
Feb  2 06:30:49 np0005604943 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 06:30:49 np0005604943 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 06:30:49 np0005604943 podman[74895]: 2026-02-02 11:30:49.412063475 +0000 UTC m=+0.052991711 container create d645437b2b507b601e55893e8cc7c02a19dee879dd7340de7bb03764926abb0d (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:30:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48eaa216c656ddaf470201f7c70b7b78f71d5f1fa4d34a01be5fe3f949bfc8f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48eaa216c656ddaf470201f7c70b7b78f71d5f1fa4d34a01be5fe3f949bfc8f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48eaa216c656ddaf470201f7c70b7b78f71d5f1fa4d34a01be5fe3f949bfc8f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48eaa216c656ddaf470201f7c70b7b78f71d5f1fa4d34a01be5fe3f949bfc8f4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:49 np0005604943 podman[74895]: 2026-02-02 11:30:49.482528528 +0000 UTC m=+0.123456824 container init d645437b2b507b601e55893e8cc7c02a19dee879dd7340de7bb03764926abb0d (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:30:49 np0005604943 podman[74895]: 2026-02-02 11:30:49.388125075 +0000 UTC m=+0.029053321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:49 np0005604943 podman[74895]: 2026-02-02 11:30:49.494841063 +0000 UTC m=+0.135769299 container start d645437b2b507b601e55893e8cc7c02a19dee879dd7340de7bb03764926abb0d (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:30:49 np0005604943 bash[74895]: d645437b2b507b601e55893e8cc7c02a19dee879dd7340de7bb03764926abb0d
Feb  2 06:30:49 np0005604943 systemd[1]: Started Ceph mon.compute-0 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: pidfile_write: ignore empty --pid-file
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: load: jerasure load: lrc 
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: RocksDB version: 7.9.2
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Git sha 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: DB SUMMARY
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: DB Session ID:  QAT82XWQ8O95L7ROHE0K
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: CURRENT file:  CURRENT
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                         Options.error_if_exists: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                       Options.create_if_missing: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                                     Options.env: 0x5615db127440
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                                      Options.fs: PosixFileSystem
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                                Options.info_log: 0x5615dd1953e0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                              Options.statistics: (nil)
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                               Options.use_fsync: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                              Options.db_log_dir: 
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                                 Options.wal_dir: 
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                    Options.write_buffer_manager: 0x5615dd114140
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                  Options.unordered_write: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                               Options.row_cache: None
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                              Options.wal_filter: None
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.two_write_queues: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.wal_compression: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.atomic_flush: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.max_background_jobs: 2
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.max_background_compactions: -1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.max_subcompactions: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.max_total_wal_size: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                          Options.max_open_files: -1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:       Options.compaction_readahead_size: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Compression algorithms supported:
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: 	kZSTD supported: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: 	kXpressCompression supported: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: 	kBZip2Compression supported: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: 	kLZ4Compression supported: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: 	kZlibCompression supported: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: 	kLZ4HCCompression supported: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: 	kSnappyCompression supported: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:           Options.merge_operator: 
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:        Options.compaction_filter: None
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5615dd120600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5615dd1058d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:        Options.write_buffer_size: 33554432
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:  Options.max_write_buffer_number: 2
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:          Options.compression: NoCompression
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.num_levels: 7
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cd28d1c1-a55b-4e90-928b-e550748bad19
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031849558134, "job": 1, "event": "recovery_started", "wal_files": [4]}
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031849560831, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QAT82XWQ8O95L7ROHE0K", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031849560965, "job": 1, "event": "recovery_finished"}
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5615dd132e00
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: DB pointer 0x5615dd27e000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5615dd1058d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@-1(???) e0 preinit fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(probing) e0 win_standalone_election
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(probing) e1 win_standalone_election
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: paxos.0).electionLogic(2) init, last seen epoch 2
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 06:30:49 np0005604943 podman[74916]: 2026-02-02 11:30:49.596645158 +0000 UTC m=+0.058401047 container create 9dd384f50a5df9ee38273dc91d4642114b61c797d0ecd853aa6769e53d45908c (image=quay.io/ceph/ceph:v20, name=romantic_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: log_channel(cluster) log [DBG] : fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T11:30:47.628454+0000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: log_channel(cluster) log [DBG] : created 2026-02-02T11:30:47.628454+0000
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-02-02T11:30:47.909697Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,os=Linux}
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).mds e1 new map
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2026-02-02T11:30:49:598633+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: log_channel(cluster) log [DBG] : fsmap 
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mkfs 4548a36b-7cdc-5e3e-a814-4e1571be1fae
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 06:30:49 np0005604943 systemd[1]: Started libpod-conmon-9dd384f50a5df9ee38273dc91d4642114b61c797d0ecd853aa6769e53d45908c.scope.
Feb  2 06:30:49 np0005604943 podman[74916]: 2026-02-02 11:30:49.575251497 +0000 UTC m=+0.037007386 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:49 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167874f09223a942104810bf14725a4e8715e2a719f646667f1b1f163a967864/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167874f09223a942104810bf14725a4e8715e2a719f646667f1b1f163a967864/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167874f09223a942104810bf14725a4e8715e2a719f646667f1b1f163a967864/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:49 np0005604943 podman[74916]: 2026-02-02 11:30:49.717744787 +0000 UTC m=+0.179500716 container init 9dd384f50a5df9ee38273dc91d4642114b61c797d0ecd853aa6769e53d45908c (image=quay.io/ceph/ceph:v20, name=romantic_johnson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:30:49 np0005604943 podman[74916]: 2026-02-02 11:30:49.727240555 +0000 UTC m=+0.188996424 container start 9dd384f50a5df9ee38273dc91d4642114b61c797d0ecd853aa6769e53d45908c (image=quay.io/ceph/ceph:v20, name=romantic_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:30:49 np0005604943 podman[74916]: 2026-02-02 11:30:49.73107369 +0000 UTC m=+0.192829579 container attach 9dd384f50a5df9ee38273dc91d4642114b61c797d0ecd853aa6769e53d45908c (image=quay.io/ceph/ceph:v20, name=romantic_johnson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb  2 06:30:49 np0005604943 ceph-mon[74915]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3673539816' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]:  cluster:
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]:    id:     4548a36b-7cdc-5e3e-a814-4e1571be1fae
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]:    health: HEALTH_OK
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]: 
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]:  services:
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]:    mon: 1 daemons, quorum compute-0 (age 0.365071s) [leader: compute-0]
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]:    mgr: no daemons active
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]:    osd: 0 osds: 0 up, 0 in
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]: 
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]:  data:
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]:    pools:   0 pools, 0 pgs
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]:    objects: 0 objects, 0 B
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]:    usage:   0 B used, 0 B / 0 B avail
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]:    pgs:     
Feb  2 06:30:49 np0005604943 romantic_johnson[74970]: 
Feb  2 06:30:49 np0005604943 systemd[1]: libpod-9dd384f50a5df9ee38273dc91d4642114b61c797d0ecd853aa6769e53d45908c.scope: Deactivated successfully.
Feb  2 06:30:49 np0005604943 podman[74916]: 2026-02-02 11:30:49.974902752 +0000 UTC m=+0.436658641 container died 9dd384f50a5df9ee38273dc91d4642114b61c797d0ecd853aa6769e53d45908c (image=quay.io/ceph/ceph:v20, name=romantic_johnson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 06:30:50 np0005604943 systemd[1]: var-lib-containers-storage-overlay-167874f09223a942104810bf14725a4e8715e2a719f646667f1b1f163a967864-merged.mount: Deactivated successfully.
Feb  2 06:30:50 np0005604943 podman[74916]: 2026-02-02 11:30:50.015965857 +0000 UTC m=+0.477721716 container remove 9dd384f50a5df9ee38273dc91d4642114b61c797d0ecd853aa6769e53d45908c (image=quay.io/ceph/ceph:v20, name=romantic_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:30:50 np0005604943 systemd[1]: libpod-conmon-9dd384f50a5df9ee38273dc91d4642114b61c797d0ecd853aa6769e53d45908c.scope: Deactivated successfully.
Feb  2 06:30:50 np0005604943 podman[75007]: 2026-02-02 11:30:50.084894889 +0000 UTC m=+0.048226531 container create e2717077ca5e4c037757be53af16421789c66794dd91f1ff09573a5ccbf1ce7f (image=quay.io/ceph/ceph:v20, name=optimistic_lewin, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:30:50 np0005604943 systemd[1]: Started libpod-conmon-e2717077ca5e4c037757be53af16421789c66794dd91f1ff09573a5ccbf1ce7f.scope.
Feb  2 06:30:50 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44cc8f05e8feeec67e8de68fcb59dcfc0ab8364792642da05e13f676c38b6c3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44cc8f05e8feeec67e8de68fcb59dcfc0ab8364792642da05e13f676c38b6c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44cc8f05e8feeec67e8de68fcb59dcfc0ab8364792642da05e13f676c38b6c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44cc8f05e8feeec67e8de68fcb59dcfc0ab8364792642da05e13f676c38b6c3/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:50 np0005604943 podman[75007]: 2026-02-02 11:30:50.155562859 +0000 UTC m=+0.118894521 container init e2717077ca5e4c037757be53af16421789c66794dd91f1ff09573a5ccbf1ce7f (image=quay.io/ceph/ceph:v20, name=optimistic_lewin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:30:50 np0005604943 podman[75007]: 2026-02-02 11:30:50.159454655 +0000 UTC m=+0.122786297 container start e2717077ca5e4c037757be53af16421789c66794dd91f1ff09573a5ccbf1ce7f (image=quay.io/ceph/ceph:v20, name=optimistic_lewin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 06:30:50 np0005604943 podman[75007]: 2026-02-02 11:30:50.162546689 +0000 UTC m=+0.125878351 container attach e2717077ca5e4c037757be53af16421789c66794dd91f1ff09573a5ccbf1ce7f (image=quay.io/ceph/ceph:v20, name=optimistic_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 06:30:50 np0005604943 podman[75007]: 2026-02-02 11:30:50.070426746 +0000 UTC m=+0.033758408 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:50 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb  2 06:30:50 np0005604943 ceph-mon[74915]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1526693827' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  2 06:30:50 np0005604943 ceph-mon[74915]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1526693827' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  2 06:30:50 np0005604943 optimistic_lewin[75024]: 
Feb  2 06:30:50 np0005604943 optimistic_lewin[75024]: [global]
Feb  2 06:30:50 np0005604943 optimistic_lewin[75024]: #011fsid = 4548a36b-7cdc-5e3e-a814-4e1571be1fae
Feb  2 06:30:50 np0005604943 optimistic_lewin[75024]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb  2 06:30:50 np0005604943 optimistic_lewin[75024]: #011osd_crush_chooseleaf_type = 0
Feb  2 06:30:50 np0005604943 systemd[1]: libpod-e2717077ca5e4c037757be53af16421789c66794dd91f1ff09573a5ccbf1ce7f.scope: Deactivated successfully.
Feb  2 06:30:50 np0005604943 podman[75007]: 2026-02-02 11:30:50.393417329 +0000 UTC m=+0.356748981 container died e2717077ca5e4c037757be53af16421789c66794dd91f1ff09573a5ccbf1ce7f (image=quay.io/ceph/ceph:v20, name=optimistic_lewin, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 06:30:50 np0005604943 systemd[1]: var-lib-containers-storage-overlay-e44cc8f05e8feeec67e8de68fcb59dcfc0ab8364792642da05e13f676c38b6c3-merged.mount: Deactivated successfully.
Feb  2 06:30:50 np0005604943 podman[75007]: 2026-02-02 11:30:50.435118622 +0000 UTC m=+0.398450304 container remove e2717077ca5e4c037757be53af16421789c66794dd91f1ff09573a5ccbf1ce7f (image=quay.io/ceph/ceph:v20, name=optimistic_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 06:30:50 np0005604943 systemd[1]: libpod-conmon-e2717077ca5e4c037757be53af16421789c66794dd91f1ff09573a5ccbf1ce7f.scope: Deactivated successfully.
Feb  2 06:30:50 np0005604943 podman[75063]: 2026-02-02 11:30:50.49432931 +0000 UTC m=+0.044624183 container create 7331ed1359b3e408993e25483500326d24438acf7e139c93341621a2f190fc47 (image=quay.io/ceph/ceph:v20, name=great_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 06:30:50 np0005604943 systemd[1]: Started libpod-conmon-7331ed1359b3e408993e25483500326d24438acf7e139c93341621a2f190fc47.scope.
Feb  2 06:30:50 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaacefaf0f9ac5c413707f553e31c9704583c9e7a1df7acf4eb41efabe34e50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaacefaf0f9ac5c413707f553e31c9704583c9e7a1df7acf4eb41efabe34e50/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaacefaf0f9ac5c413707f553e31c9704583c9e7a1df7acf4eb41efabe34e50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaacefaf0f9ac5c413707f553e31c9704583c9e7a1df7acf4eb41efabe34e50/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:50 np0005604943 podman[75063]: 2026-02-02 11:30:50.469341851 +0000 UTC m=+0.019636714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:50 np0005604943 podman[75063]: 2026-02-02 11:30:50.576540673 +0000 UTC m=+0.126835556 container init 7331ed1359b3e408993e25483500326d24438acf7e139c93341621a2f190fc47 (image=quay.io/ceph/ceph:v20, name=great_hermann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:30:50 np0005604943 podman[75063]: 2026-02-02 11:30:50.591596082 +0000 UTC m=+0.141890955 container start 7331ed1359b3e408993e25483500326d24438acf7e139c93341621a2f190fc47 (image=quay.io/ceph/ceph:v20, name=great_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 06:30:50 np0005604943 podman[75063]: 2026-02-02 11:30:50.595453666 +0000 UTC m=+0.145748580 container attach 7331ed1359b3e408993e25483500326d24438acf7e139c93341621a2f190fc47 (image=quay.io/ceph/ceph:v20, name=great_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 06:30:50 np0005604943 ceph-mon[74915]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 06:30:50 np0005604943 ceph-mon[74915]: from='client.? 192.168.122.100:0/1526693827' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  2 06:30:50 np0005604943 ceph-mon[74915]: from='client.? 192.168.122.100:0/1526693827' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  2 06:30:50 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:30:50 np0005604943 ceph-mon[74915]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1110126289' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:30:50 np0005604943 systemd[1]: libpod-7331ed1359b3e408993e25483500326d24438acf7e139c93341621a2f190fc47.scope: Deactivated successfully.
Feb  2 06:30:50 np0005604943 podman[75063]: 2026-02-02 11:30:50.8230851 +0000 UTC m=+0.373379963 container died 7331ed1359b3e408993e25483500326d24438acf7e139c93341621a2f190fc47 (image=quay.io/ceph/ceph:v20, name=great_hermann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:30:50 np0005604943 systemd[1]: var-lib-containers-storage-overlay-aaaacefaf0f9ac5c413707f553e31c9704583c9e7a1df7acf4eb41efabe34e50-merged.mount: Deactivated successfully.
Feb  2 06:30:50 np0005604943 podman[75063]: 2026-02-02 11:30:50.869148511 +0000 UTC m=+0.419443384 container remove 7331ed1359b3e408993e25483500326d24438acf7e139c93341621a2f190fc47 (image=quay.io/ceph/ceph:v20, name=great_hermann, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:30:50 np0005604943 systemd[1]: libpod-conmon-7331ed1359b3e408993e25483500326d24438acf7e139c93341621a2f190fc47.scope: Deactivated successfully.
Feb  2 06:30:50 np0005604943 systemd[1]: Stopping Ceph mon.compute-0 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae...
Feb  2 06:30:51 np0005604943 ceph-mon[74915]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb  2 06:30:51 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb  2 06:30:51 np0005604943 ceph-mon[74915]: mon.compute-0@0(leader) e1 shutdown
Feb  2 06:30:51 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0[74911]: 2026-02-02T11:30:51.099+0000 7fe6fbcdc640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb  2 06:30:51 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0[74911]: 2026-02-02T11:30:51.099+0000 7fe6fbcdc640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb  2 06:30:51 np0005604943 ceph-mon[74915]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  2 06:30:51 np0005604943 ceph-mon[74915]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  2 06:30:51 np0005604943 podman[75148]: 2026-02-02 11:30:51.212937718 +0000 UTC m=+0.163405600 container died d645437b2b507b601e55893e8cc7c02a19dee879dd7340de7bb03764926abb0d (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:30:51 np0005604943 systemd[1]: var-lib-containers-storage-overlay-48eaa216c656ddaf470201f7c70b7b78f71d5f1fa4d34a01be5fe3f949bfc8f4-merged.mount: Deactivated successfully.
Feb  2 06:30:51 np0005604943 podman[75148]: 2026-02-02 11:30:51.251438994 +0000 UTC m=+0.201906836 container remove d645437b2b507b601e55893e8cc7c02a19dee879dd7340de7bb03764926abb0d (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:30:51 np0005604943 bash[75148]: ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0
Feb  2 06:30:51 np0005604943 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 06:30:51 np0005604943 systemd[1]: ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae@mon.compute-0.service: Deactivated successfully.
Feb  2 06:30:51 np0005604943 systemd[1]: Stopped Ceph mon.compute-0 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:30:51 np0005604943 systemd[1]: ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae@mon.compute-0.service: Consumed 1.025s CPU time.
Feb  2 06:30:51 np0005604943 systemd[1]: Starting Ceph mon.compute-0 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae...
Feb  2 06:30:51 np0005604943 podman[75252]: 2026-02-02 11:30:51.642770333 +0000 UTC m=+0.044931302 container create fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:30:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9853a0e16c6435719fe32bd61ebb93037a2d91a40d261a4f51ee7b08167f64f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9853a0e16c6435719fe32bd61ebb93037a2d91a40d261a4f51ee7b08167f64f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9853a0e16c6435719fe32bd61ebb93037a2d91a40d261a4f51ee7b08167f64f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9853a0e16c6435719fe32bd61ebb93037a2d91a40d261a4f51ee7b08167f64f5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:51 np0005604943 podman[75252]: 2026-02-02 11:30:51.705542938 +0000 UTC m=+0.107703987 container init fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 06:30:51 np0005604943 podman[75252]: 2026-02-02 11:30:51.624542948 +0000 UTC m=+0.026703947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:51 np0005604943 podman[75252]: 2026-02-02 11:30:51.721274145 +0000 UTC m=+0.123435114 container start fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:30:51 np0005604943 bash[75252]: fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04
Feb  2 06:30:51 np0005604943 systemd[1]: Started Ceph mon.compute-0 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: pidfile_write: ignore empty --pid-file
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: load: jerasure load: lrc 
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: RocksDB version: 7.9.2
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Git sha 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: DB SUMMARY
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: DB Session ID:  QIU1XPNVBJBWFCSW99QT
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: CURRENT file:  CURRENT
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                         Options.error_if_exists: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                       Options.create_if_missing: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                                     Options.env: 0x55cd5c064440
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                                      Options.fs: PosixFileSystem
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                                Options.info_log: 0x55cd5e48be80
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                              Options.statistics: (nil)
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                               Options.use_fsync: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                              Options.db_log_dir: 
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                                 Options.wal_dir: 
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                    Options.write_buffer_manager: 0x55cd5e4d6140
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                  Options.unordered_write: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                               Options.row_cache: None
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                              Options.wal_filter: None
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.two_write_queues: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.wal_compression: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.atomic_flush: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.max_background_jobs: 2
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.max_background_compactions: -1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.max_subcompactions: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.max_total_wal_size: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                          Options.max_open_files: -1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:       Options.compaction_readahead_size: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Compression algorithms supported:
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: #011kZSTD supported: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: #011kXpressCompression supported: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: #011kBZip2Compression supported: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: #011kLZ4Compression supported: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: #011kZlibCompression supported: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: #011kSnappyCompression supported: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:           Options.merge_operator: 
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:        Options.compaction_filter: None
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cd5e4e2a00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cd5e4c78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:        Options.write_buffer_size: 33554432
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:  Options.max_write_buffer_number: 2
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:          Options.compression: NoCompression
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.num_levels: 7
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cd28d1c1-a55b-4e90-928b-e550748bad19
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031851781472, "job": 1, "event": "recovery_started", "wal_files": [9]}
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031851786486, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031851, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031851786606, "job": 1, "event": "recovery_finished"}
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55cd5e4f4e00
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: DB pointer 0x55cd5e63e000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.1      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.1      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.1      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.1      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.34 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.34 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cd5e4c78d0#2 capacity: 512.00 MB usage: 1.80 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: mon.compute-0@-1(???) e1 preinit fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: mon.compute-0@-1(???).mds e1 new map
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2026-02-02T11:30:49:598633+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(probing) e1 win_standalone_election
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T11:30:47.628454+0000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : created 2026-02-02T11:30:47.628454+0000
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : fsmap 
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb  2 06:30:51 np0005604943 podman[75272]: 2026-02-02 11:30:51.814656701 +0000 UTC m=+0.062228861 container create 795002d5dee4678593b54984b93be78fc44a4af435ab4f0a6a660e3752eeeac2 (image=quay.io/ceph/ceph:v20, name=angry_shtern, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 06:30:51 np0005604943 systemd[1]: Started libpod-conmon-795002d5dee4678593b54984b93be78fc44a4af435ab4f0a6a660e3752eeeac2.scope.
Feb  2 06:30:51 np0005604943 ceph-mon[75271]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 06:30:51 np0005604943 podman[75272]: 2026-02-02 11:30:51.788606504 +0000 UTC m=+0.036178704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:51 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6539163c95baac86c2a176210f156296046a522d93af02ed728c5dccc4c811/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6539163c95baac86c2a176210f156296046a522d93af02ed728c5dccc4c811/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6539163c95baac86c2a176210f156296046a522d93af02ed728c5dccc4c811/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:51 np0005604943 podman[75272]: 2026-02-02 11:30:51.930529799 +0000 UTC m=+0.178102009 container init 795002d5dee4678593b54984b93be78fc44a4af435ab4f0a6a660e3752eeeac2 (image=quay.io/ceph/ceph:v20, name=angry_shtern, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 06:30:51 np0005604943 podman[75272]: 2026-02-02 11:30:51.937588151 +0000 UTC m=+0.185160321 container start 795002d5dee4678593b54984b93be78fc44a4af435ab4f0a6a660e3752eeeac2 (image=quay.io/ceph/ceph:v20, name=angry_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb  2 06:30:51 np0005604943 podman[75272]: 2026-02-02 11:30:51.941175027 +0000 UTC m=+0.188747167 container attach 795002d5dee4678593b54984b93be78fc44a4af435ab4f0a6a660e3752eeeac2 (image=quay.io/ceph/ceph:v20, name=angry_shtern, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 06:30:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Feb  2 06:30:52 np0005604943 systemd[1]: libpod-795002d5dee4678593b54984b93be78fc44a4af435ab4f0a6a660e3752eeeac2.scope: Deactivated successfully.
Feb  2 06:30:52 np0005604943 podman[75272]: 2026-02-02 11:30:52.190370406 +0000 UTC m=+0.437942536 container died 795002d5dee4678593b54984b93be78fc44a4af435ab4f0a6a660e3752eeeac2 (image=quay.io/ceph/ceph:v20, name=angry_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 06:30:52 np0005604943 systemd[1]: var-lib-containers-storage-overlay-3a6539163c95baac86c2a176210f156296046a522d93af02ed728c5dccc4c811-merged.mount: Deactivated successfully.
Feb  2 06:30:52 np0005604943 podman[75272]: 2026-02-02 11:30:52.222131019 +0000 UTC m=+0.469703159 container remove 795002d5dee4678593b54984b93be78fc44a4af435ab4f0a6a660e3752eeeac2 (image=quay.io/ceph/ceph:v20, name=angry_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:30:52 np0005604943 systemd[1]: libpod-conmon-795002d5dee4678593b54984b93be78fc44a4af435ab4f0a6a660e3752eeeac2.scope: Deactivated successfully.
Feb  2 06:30:52 np0005604943 podman[75362]: 2026-02-02 11:30:52.297929367 +0000 UTC m=+0.053644407 container create 0121affe7b32c82be68c1a4ed533d6279387e6f4c1a3adadee4645136ae16211 (image=quay.io/ceph/ceph:v20, name=objective_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:30:52 np0005604943 systemd[1]: Started libpod-conmon-0121affe7b32c82be68c1a4ed533d6279387e6f4c1a3adadee4645136ae16211.scope.
Feb  2 06:30:52 np0005604943 podman[75362]: 2026-02-02 11:30:52.274429739 +0000 UTC m=+0.030144849 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:52 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:52 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f826ee527a971bd877d45a8895cefcce7801355fd1c5a023dfa5041c2b6540d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:52 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f826ee527a971bd877d45a8895cefcce7801355fd1c5a023dfa5041c2b6540d0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:52 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f826ee527a971bd877d45a8895cefcce7801355fd1c5a023dfa5041c2b6540d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:52 np0005604943 podman[75362]: 2026-02-02 11:30:52.402451307 +0000 UTC m=+0.158166427 container init 0121affe7b32c82be68c1a4ed533d6279387e6f4c1a3adadee4645136ae16211 (image=quay.io/ceph/ceph:v20, name=objective_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:30:52 np0005604943 podman[75362]: 2026-02-02 11:30:52.40958554 +0000 UTC m=+0.165300610 container start 0121affe7b32c82be68c1a4ed533d6279387e6f4c1a3adadee4645136ae16211 (image=quay.io/ceph/ceph:v20, name=objective_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:30:52 np0005604943 podman[75362]: 2026-02-02 11:30:52.413079245 +0000 UTC m=+0.168794305 container attach 0121affe7b32c82be68c1a4ed533d6279387e6f4c1a3adadee4645136ae16211 (image=quay.io/ceph/ceph:v20, name=objective_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 06:30:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Feb  2 06:30:52 np0005604943 systemd[1]: libpod-0121affe7b32c82be68c1a4ed533d6279387e6f4c1a3adadee4645136ae16211.scope: Deactivated successfully.
Feb  2 06:30:52 np0005604943 podman[75362]: 2026-02-02 11:30:52.668662627 +0000 UTC m=+0.424377697 container died 0121affe7b32c82be68c1a4ed533d6279387e6f4c1a3adadee4645136ae16211 (image=quay.io/ceph/ceph:v20, name=objective_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:30:52 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f826ee527a971bd877d45a8895cefcce7801355fd1c5a023dfa5041c2b6540d0-merged.mount: Deactivated successfully.
Feb  2 06:30:52 np0005604943 podman[75362]: 2026-02-02 11:30:52.715836599 +0000 UTC m=+0.471551669 container remove 0121affe7b32c82be68c1a4ed533d6279387e6f4c1a3adadee4645136ae16211 (image=quay.io/ceph/ceph:v20, name=objective_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 06:30:52 np0005604943 systemd[1]: libpod-conmon-0121affe7b32c82be68c1a4ed533d6279387e6f4c1a3adadee4645136ae16211.scope: Deactivated successfully.
Feb  2 06:30:52 np0005604943 systemd[1]: Reloading.
Feb  2 06:30:52 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:30:52 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:30:53 np0005604943 systemd[1]: Reloading.
Feb  2 06:30:53 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:30:53 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:30:53 np0005604943 systemd[1]: Starting Ceph mgr.compute-0.twcemg for 4548a36b-7cdc-5e3e-a814-4e1571be1fae...
Feb  2 06:30:53 np0005604943 podman[75541]: 2026-02-02 11:30:53.504428607 +0000 UTC m=+0.063910377 container create e108912e9f7d348f2198e623df71406e8b579a2b0383329619c90634f2e480e3 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 06:30:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c8e788c6d04014e4e74a6c8045d0882f4eaf94f1f7d567d237c4e2fdca572c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c8e788c6d04014e4e74a6c8045d0882f4eaf94f1f7d567d237c4e2fdca572c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c8e788c6d04014e4e74a6c8045d0882f4eaf94f1f7d567d237c4e2fdca572c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c8e788c6d04014e4e74a6c8045d0882f4eaf94f1f7d567d237c4e2fdca572c/merged/var/lib/ceph/mgr/ceph-compute-0.twcemg supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:53 np0005604943 podman[75541]: 2026-02-02 11:30:53.476262902 +0000 UTC m=+0.035744752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:53 np0005604943 podman[75541]: 2026-02-02 11:30:53.573942615 +0000 UTC m=+0.133424445 container init e108912e9f7d348f2198e623df71406e8b579a2b0383329619c90634f2e480e3 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 06:30:53 np0005604943 podman[75541]: 2026-02-02 11:30:53.580027071 +0000 UTC m=+0.139508841 container start e108912e9f7d348f2198e623df71406e8b579a2b0383329619c90634f2e480e3 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb  2 06:30:53 np0005604943 bash[75541]: e108912e9f7d348f2198e623df71406e8b579a2b0383329619c90634f2e480e3
Feb  2 06:30:53 np0005604943 systemd[1]: Started Ceph mgr.compute-0.twcemg for 4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:30:53 np0005604943 ceph-mgr[75558]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 06:30:53 np0005604943 ceph-mgr[75558]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb  2 06:30:53 np0005604943 ceph-mgr[75558]: pidfile_write: ignore empty --pid-file
Feb  2 06:30:53 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'alerts'
Feb  2 06:30:53 np0005604943 podman[75559]: 2026-02-02 11:30:53.697665936 +0000 UTC m=+0.070226788 container create a58b931d35f731c48bd7ee2454e5d533ac6dcd1d38978d02397155aa143dc3d7 (image=quay.io/ceph/ceph:v20, name=zealous_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:30:53 np0005604943 systemd[1]: Started libpod-conmon-a58b931d35f731c48bd7ee2454e5d533ac6dcd1d38978d02397155aa143dc3d7.scope.
Feb  2 06:30:53 np0005604943 podman[75559]: 2026-02-02 11:30:53.664370371 +0000 UTC m=+0.036931253 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:53 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:53 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'balancer'
Feb  2 06:30:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb58d301c79d1b468224fb8abd1e027904335fee0f3a31277a4564958d1d73e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb58d301c79d1b468224fb8abd1e027904335fee0f3a31277a4564958d1d73e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb58d301c79d1b468224fb8abd1e027904335fee0f3a31277a4564958d1d73e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:53 np0005604943 podman[75559]: 2026-02-02 11:30:53.800771756 +0000 UTC m=+0.173332578 container init a58b931d35f731c48bd7ee2454e5d533ac6dcd1d38978d02397155aa143dc3d7 (image=quay.io/ceph/ceph:v20, name=zealous_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 06:30:53 np0005604943 podman[75559]: 2026-02-02 11:30:53.811079846 +0000 UTC m=+0.183640688 container start a58b931d35f731c48bd7ee2454e5d533ac6dcd1d38978d02397155aa143dc3d7 (image=quay.io/ceph/ceph:v20, name=zealous_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:30:53 np0005604943 podman[75559]: 2026-02-02 11:30:53.815086005 +0000 UTC m=+0.187646847 container attach a58b931d35f731c48bd7ee2454e5d533ac6dcd1d38978d02397155aa143dc3d7 (image=quay.io/ceph/ceph:v20, name=zealous_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:30:53 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'cephadm'
Feb  2 06:30:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 06:30:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1594655885' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]: 
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]: {
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    "fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    "health": {
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "status": "HEALTH_OK",
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "checks": {},
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "mutes": []
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    },
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    "election_epoch": 5,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    "quorum": [
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        0
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    ],
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    "quorum_names": [
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "compute-0"
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    ],
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    "quorum_age": 2,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    "monmap": {
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "epoch": 1,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "min_mon_release_name": "tentacle",
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "num_mons": 1
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    },
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    "osdmap": {
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "epoch": 1,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "num_osds": 0,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "num_up_osds": 0,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "osd_up_since": 0,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "num_in_osds": 0,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "osd_in_since": 0,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "num_remapped_pgs": 0
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    },
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    "pgmap": {
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "pgs_by_state": [],
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "num_pgs": 0,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "num_pools": 0,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "num_objects": 0,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "data_bytes": 0,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "bytes_used": 0,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "bytes_avail": 0,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "bytes_total": 0
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    },
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    "fsmap": {
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "epoch": 1,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "btime": "2026-02-02T11:30:49:598633+0000",
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "by_rank": [],
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "up:standby": 0
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    },
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    "mgrmap": {
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "available": false,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "num_standbys": 0,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "modules": [
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:            "iostat",
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:            "nfs"
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        ],
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "services": {}
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    },
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    "servicemap": {
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "epoch": 1,
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "modified": "2026-02-02T11:30:49.600559+0000",
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:        "services": {}
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    },
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]:    "progress_events": {}
Feb  2 06:30:54 np0005604943 zealous_mclean[75596]: }
Feb  2 06:30:54 np0005604943 systemd[1]: libpod-a58b931d35f731c48bd7ee2454e5d533ac6dcd1d38978d02397155aa143dc3d7.scope: Deactivated successfully.
Feb  2 06:30:54 np0005604943 podman[75559]: 2026-02-02 11:30:54.072710043 +0000 UTC m=+0.445270845 container died a58b931d35f731c48bd7ee2454e5d533ac6dcd1d38978d02397155aa143dc3d7 (image=quay.io/ceph/ceph:v20, name=zealous_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 06:30:54 np0005604943 systemd[1]: var-lib-containers-storage-overlay-deb58d301c79d1b468224fb8abd1e027904335fee0f3a31277a4564958d1d73e-merged.mount: Deactivated successfully.
Feb  2 06:30:54 np0005604943 podman[75559]: 2026-02-02 11:30:54.110116268 +0000 UTC m=+0.482677070 container remove a58b931d35f731c48bd7ee2454e5d533ac6dcd1d38978d02397155aa143dc3d7 (image=quay.io/ceph/ceph:v20, name=zealous_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 06:30:54 np0005604943 systemd[1]: libpod-conmon-a58b931d35f731c48bd7ee2454e5d533ac6dcd1d38978d02397155aa143dc3d7.scope: Deactivated successfully.
Feb  2 06:30:54 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'crash'
Feb  2 06:30:54 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'dashboard'
Feb  2 06:30:55 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'devicehealth'
Feb  2 06:30:55 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'diskprediction_local'
Feb  2 06:30:55 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg[75554]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  2 06:30:55 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg[75554]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  2 06:30:55 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg[75554]:  from numpy import show_config as show_numpy_config
Feb  2 06:30:55 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'influx'
Feb  2 06:30:55 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'insights'
Feb  2 06:30:55 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'iostat'
Feb  2 06:30:55 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'k8sevents'
Feb  2 06:30:56 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'localpool'
Feb  2 06:30:56 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'mds_autoscaler'
Feb  2 06:30:56 np0005604943 podman[75644]: 2026-02-02 11:30:56.192535049 +0000 UTC m=+0.060631489 container create ef4d88bedab16a75d1e8669e53fc82b176b7edef239ffc858675dbb62c3c3c36 (image=quay.io/ceph/ceph:v20, name=jovial_shirley, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:30:56 np0005604943 systemd[1]: Started libpod-conmon-ef4d88bedab16a75d1e8669e53fc82b176b7edef239ffc858675dbb62c3c3c36.scope.
Feb  2 06:30:56 np0005604943 podman[75644]: 2026-02-02 11:30:56.161471074 +0000 UTC m=+0.029567364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:56 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:56 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe07af45620e3f645346ed3f6347de41cf0741d6eadee1a583002c97e59e0064/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:56 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe07af45620e3f645346ed3f6347de41cf0741d6eadee1a583002c97e59e0064/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:56 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe07af45620e3f645346ed3f6347de41cf0741d6eadee1a583002c97e59e0064/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:56 np0005604943 podman[75644]: 2026-02-02 11:30:56.303072381 +0000 UTC m=+0.171168681 container init ef4d88bedab16a75d1e8669e53fc82b176b7edef239ffc858675dbb62c3c3c36 (image=quay.io/ceph/ceph:v20, name=jovial_shirley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:30:56 np0005604943 podman[75644]: 2026-02-02 11:30:56.311932842 +0000 UTC m=+0.180029092 container start ef4d88bedab16a75d1e8669e53fc82b176b7edef239ffc858675dbb62c3c3c36 (image=quay.io/ceph/ceph:v20, name=jovial_shirley, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:30:56 np0005604943 podman[75644]: 2026-02-02 11:30:56.315863979 +0000 UTC m=+0.183960279 container attach ef4d88bedab16a75d1e8669e53fc82b176b7edef239ffc858675dbb62c3c3c36 (image=quay.io/ceph/ceph:v20, name=jovial_shirley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:30:56 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'mirroring'
Feb  2 06:30:56 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'nfs'
Feb  2 06:30:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 06:30:56 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3296359195' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]: 
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]: {
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    "fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    "health": {
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "status": "HEALTH_OK",
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "checks": {},
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "mutes": []
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    },
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    "election_epoch": 5,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    "quorum": [
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        0
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    ],
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    "quorum_names": [
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "compute-0"
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    ],
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    "quorum_age": 4,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    "monmap": {
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "epoch": 1,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "min_mon_release_name": "tentacle",
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "num_mons": 1
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    },
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    "osdmap": {
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "epoch": 1,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "num_osds": 0,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "num_up_osds": 0,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "osd_up_since": 0,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "num_in_osds": 0,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "osd_in_since": 0,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "num_remapped_pgs": 0
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    },
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    "pgmap": {
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "pgs_by_state": [],
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "num_pgs": 0,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "num_pools": 0,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "num_objects": 0,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "data_bytes": 0,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "bytes_used": 0,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "bytes_avail": 0,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "bytes_total": 0
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    },
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    "fsmap": {
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "epoch": 1,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "btime": "2026-02-02T11:30:49:598633+0000",
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "by_rank": [],
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "up:standby": 0
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    },
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    "mgrmap": {
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "available": false,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "num_standbys": 0,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "modules": [
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:            "iostat",
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:            "nfs"
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        ],
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "services": {}
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    },
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    "servicemap": {
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "epoch": 1,
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "modified": "2026-02-02T11:30:49.600559+0000",
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:        "services": {}
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    },
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]:    "progress_events": {}
Feb  2 06:30:56 np0005604943 jovial_shirley[75660]: }
Feb  2 06:30:56 np0005604943 systemd[1]: libpod-ef4d88bedab16a75d1e8669e53fc82b176b7edef239ffc858675dbb62c3c3c36.scope: Deactivated successfully.
Feb  2 06:30:56 np0005604943 podman[75644]: 2026-02-02 11:30:56.522916122 +0000 UTC m=+0.391012402 container died ef4d88bedab16a75d1e8669e53fc82b176b7edef239ffc858675dbb62c3c3c36 (image=quay.io/ceph/ceph:v20, name=jovial_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 06:30:56 np0005604943 systemd[1]: var-lib-containers-storage-overlay-fe07af45620e3f645346ed3f6347de41cf0741d6eadee1a583002c97e59e0064-merged.mount: Deactivated successfully.
Feb  2 06:30:56 np0005604943 podman[75644]: 2026-02-02 11:30:56.578189023 +0000 UTC m=+0.446285273 container remove ef4d88bedab16a75d1e8669e53fc82b176b7edef239ffc858675dbb62c3c3c36 (image=quay.io/ceph/ceph:v20, name=jovial_shirley, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:30:56 np0005604943 systemd[1]: libpod-conmon-ef4d88bedab16a75d1e8669e53fc82b176b7edef239ffc858675dbb62c3c3c36.scope: Deactivated successfully.
Feb  2 06:30:56 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'orchestrator'
Feb  2 06:30:56 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'osd_perf_query'
Feb  2 06:30:56 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'osd_support'
Feb  2 06:30:57 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'pg_autoscaler'
Feb  2 06:30:57 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'progress'
Feb  2 06:30:57 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'prometheus'
Feb  2 06:30:57 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'rbd_support'
Feb  2 06:30:57 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'rgw'
Feb  2 06:30:57 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'rook'
Feb  2 06:30:58 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'selftest'
Feb  2 06:30:58 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'smb'
Feb  2 06:30:58 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'snap_schedule'
Feb  2 06:30:58 np0005604943 podman[75699]: 2026-02-02 11:30:58.65014839 +0000 UTC m=+0.048381646 container create 9023128dcacc209061920ddcdd1f8a9f82a850159ec5690be59b7324dc91dda3 (image=quay.io/ceph/ceph:v20, name=zen_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 06:30:58 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'stats'
Feb  2 06:30:58 np0005604943 systemd[1]: Started libpod-conmon-9023128dcacc209061920ddcdd1f8a9f82a850159ec5690be59b7324dc91dda3.scope.
Feb  2 06:30:58 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:30:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885bc5d2a1aeb25b7f4138e7daf520a0aa0c94791a9cb0c3762b27b3835fa451/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885bc5d2a1aeb25b7f4138e7daf520a0aa0c94791a9cb0c3762b27b3835fa451/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885bc5d2a1aeb25b7f4138e7daf520a0aa0c94791a9cb0c3762b27b3835fa451/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:30:58 np0005604943 podman[75699]: 2026-02-02 11:30:58.630787164 +0000 UTC m=+0.029020410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:30:58 np0005604943 podman[75699]: 2026-02-02 11:30:58.732703112 +0000 UTC m=+0.130936338 container init 9023128dcacc209061920ddcdd1f8a9f82a850159ec5690be59b7324dc91dda3 (image=quay.io/ceph/ceph:v20, name=zen_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Feb  2 06:30:58 np0005604943 podman[75699]: 2026-02-02 11:30:58.737776999 +0000 UTC m=+0.136010245 container start 9023128dcacc209061920ddcdd1f8a9f82a850159ec5690be59b7324dc91dda3 (image=quay.io/ceph/ceph:v20, name=zen_kirch, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 06:30:58 np0005604943 podman[75699]: 2026-02-02 11:30:58.751118722 +0000 UTC m=+0.149351948 container attach 9023128dcacc209061920ddcdd1f8a9f82a850159ec5690be59b7324dc91dda3 (image=quay.io/ceph/ceph:v20, name=zen_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 06:30:58 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'status'
Feb  2 06:30:58 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'telegraf'
Feb  2 06:30:58 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'telemetry'
Feb  2 06:30:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 06:30:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2830287468' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb  2 06:30:58 np0005604943 zen_kirch[75715]: 
Feb  2 06:30:58 np0005604943 zen_kirch[75715]: {
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    "fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    "health": {
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "status": "HEALTH_OK",
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "checks": {},
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "mutes": []
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    },
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    "election_epoch": 5,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    "quorum": [
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        0
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    ],
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    "quorum_names": [
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "compute-0"
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    ],
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    "quorum_age": 7,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    "monmap": {
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "epoch": 1,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "min_mon_release_name": "tentacle",
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "num_mons": 1
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    },
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    "osdmap": {
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "epoch": 1,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "num_osds": 0,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "num_up_osds": 0,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "osd_up_since": 0,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "num_in_osds": 0,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "osd_in_since": 0,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "num_remapped_pgs": 0
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    },
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    "pgmap": {
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "pgs_by_state": [],
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "num_pgs": 0,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "num_pools": 0,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "num_objects": 0,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "data_bytes": 0,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "bytes_used": 0,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "bytes_avail": 0,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "bytes_total": 0
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    },
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    "fsmap": {
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "epoch": 1,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "btime": "2026-02-02T11:30:49:598633+0000",
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "by_rank": [],
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "up:standby": 0
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    },
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    "mgrmap": {
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "available": false,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "num_standbys": 0,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "modules": [
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:            "iostat",
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:            "nfs"
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        ],
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "services": {}
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    },
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    "servicemap": {
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "epoch": 1,
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "modified": "2026-02-02T11:30:49.600559+0000",
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:        "services": {}
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    },
Feb  2 06:30:58 np0005604943 zen_kirch[75715]:    "progress_events": {}
Feb  2 06:30:58 np0005604943 zen_kirch[75715]: }
Feb  2 06:30:58 np0005604943 systemd[1]: libpod-9023128dcacc209061920ddcdd1f8a9f82a850159ec5690be59b7324dc91dda3.scope: Deactivated successfully.
Feb  2 06:30:58 np0005604943 conmon[75715]: conmon 9023128dcacc20906192 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9023128dcacc209061920ddcdd1f8a9f82a850159ec5690be59b7324dc91dda3.scope/container/memory.events
Feb  2 06:30:58 np0005604943 podman[75699]: 2026-02-02 11:30:58.93550283 +0000 UTC m=+0.333736096 container died 9023128dcacc209061920ddcdd1f8a9f82a850159ec5690be59b7324dc91dda3 (image=quay.io/ceph/ceph:v20, name=zen_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 06:30:58 np0005604943 systemd[1]: var-lib-containers-storage-overlay-885bc5d2a1aeb25b7f4138e7daf520a0aa0c94791a9cb0c3762b27b3835fa451-merged.mount: Deactivated successfully.
Feb  2 06:30:58 np0005604943 podman[75699]: 2026-02-02 11:30:58.977632045 +0000 UTC m=+0.375865301 container remove 9023128dcacc209061920ddcdd1f8a9f82a850159ec5690be59b7324dc91dda3 (image=quay.io/ceph/ceph:v20, name=zen_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:30:58 np0005604943 systemd[1]: libpod-conmon-9023128dcacc209061920ddcdd1f8a9f82a850159ec5690be59b7324dc91dda3.scope: Deactivated successfully.
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'test_orchestrator'
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'volumes'
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: ms_deliver_dispatch: unhandled message 0x560d636c3860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.twcemg
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr handle_mgr_map Activating!
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr handle_mgr_map I am now activating
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.twcemg(active, starting, since 0.0143654s)
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' cmd={"prefix": "mds metadata"} : dispatch
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).mds e1 all = 1
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata"} : dispatch
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' cmd={"prefix": "mon metadata"} : dispatch
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.twcemg", "id": "compute-0.twcemg"} v 0)
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' cmd={"prefix": "mgr metadata", "who": "compute-0.twcemg", "id": "compute-0.twcemg"} : dispatch
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: balancer
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [balancer INFO root] Starting
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : Manager daemon compute-0.twcemg is now available
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: crash
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:30:59
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [balancer INFO root] No pools available
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: devicehealth
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: iostat
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [devicehealth INFO root] Starting
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: nfs
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: orchestrator
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: pg_autoscaler
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: progress
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [progress INFO root] Loading...
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [progress INFO root] No stored events to load
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [progress INFO root] Loaded [] historic events
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [progress INFO root] Loaded OSDMap, ready.
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] recovery thread starting
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] starting setup
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: rbd_support
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: status
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.twcemg/mirror_snapshot_schedule"} v 0)
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.twcemg/mirror_snapshot_schedule"} : dispatch
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: telemetry
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] PerfHandler: starting
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TaskHandler: starting
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.twcemg/trash_purge_schedule"} v 0)
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.twcemg/trash_purge_schedule"} : dispatch
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' 
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] setup complete
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' 
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Feb  2 06:30:59 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: volumes
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' 
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: Activating manager daemon compute-0.twcemg
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: Manager daemon compute-0.twcemg is now available
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.twcemg/mirror_snapshot_schedule"} : dispatch
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.twcemg/trash_purge_schedule"} : dispatch
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' 
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' 
Feb  2 06:30:59 np0005604943 ceph-mon[75271]: from='mgr.14102 192.168.122.100:0/3372644604' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:00 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.twcemg(active, since 1.02951s)
Feb  2 06:31:01 np0005604943 podman[75830]: 2026-02-02 11:31:01.060364123 +0000 UTC m=+0.057331889 container create a150f29085866a2bd1ac45e8c34a3e7d8554707393a1fae694d5e83b6f58a5c4 (image=quay.io/ceph/ceph:v20, name=zen_haibt, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:01 np0005604943 systemd[1]: Started libpod-conmon-a150f29085866a2bd1ac45e8c34a3e7d8554707393a1fae694d5e83b6f58a5c4.scope.
Feb  2 06:31:01 np0005604943 podman[75830]: 2026-02-02 11:31:01.036190746 +0000 UTC m=+0.033158552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:01 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30ac141e88534cd76642cc88c27e6811b7b8dd4198fc9eca3c4d370089a55b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30ac141e88534cd76642cc88c27e6811b7b8dd4198fc9eca3c4d370089a55b4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30ac141e88534cd76642cc88c27e6811b7b8dd4198fc9eca3c4d370089a55b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:01 np0005604943 podman[75830]: 2026-02-02 11:31:01.167161854 +0000 UTC m=+0.164129670 container init a150f29085866a2bd1ac45e8c34a3e7d8554707393a1fae694d5e83b6f58a5c4 (image=quay.io/ceph/ceph:v20, name=zen_haibt, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 06:31:01 np0005604943 podman[75830]: 2026-02-02 11:31:01.17365656 +0000 UTC m=+0.170624306 container start a150f29085866a2bd1ac45e8c34a3e7d8554707393a1fae694d5e83b6f58a5c4 (image=quay.io/ceph/ceph:v20, name=zen_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 06:31:01 np0005604943 podman[75830]: 2026-02-02 11:31:01.177951096 +0000 UTC m=+0.174918892 container attach a150f29085866a2bd1ac45e8c34a3e7d8554707393a1fae694d5e83b6f58a5c4 (image=quay.io/ceph/ceph:v20, name=zen_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:01 np0005604943 ceph-mgr[75558]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 06:31:01 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.twcemg(active, since 2s)
Feb  2 06:31:01 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 06:31:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2883650054' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb  2 06:31:01 np0005604943 zen_haibt[75846]: 
Feb  2 06:31:01 np0005604943 zen_haibt[75846]: {
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    "fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    "health": {
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "status": "HEALTH_OK",
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "checks": {},
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "mutes": []
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    },
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    "election_epoch": 5,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    "quorum": [
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        0
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    ],
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    "quorum_names": [
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "compute-0"
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    ],
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    "quorum_age": 9,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    "monmap": {
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "epoch": 1,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "min_mon_release_name": "tentacle",
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "num_mons": 1
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    },
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    "osdmap": {
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "epoch": 1,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "num_osds": 0,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "num_up_osds": 0,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "osd_up_since": 0,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "num_in_osds": 0,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "osd_in_since": 0,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "num_remapped_pgs": 0
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    },
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    "pgmap": {
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "pgs_by_state": [],
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "num_pgs": 0,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "num_pools": 0,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "num_objects": 0,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "data_bytes": 0,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "bytes_used": 0,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "bytes_avail": 0,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "bytes_total": 0
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    },
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    "fsmap": {
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "epoch": 1,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "btime": "2026-02-02T11:30:49:598633+0000",
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "by_rank": [],
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "up:standby": 0
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    },
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    "mgrmap": {
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "available": true,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "num_standbys": 0,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "modules": [
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:            "iostat",
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:            "nfs"
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        ],
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "services": {}
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    },
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    "servicemap": {
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "epoch": 1,
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "modified": "2026-02-02T11:30:49.600559+0000",
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:        "services": {}
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    },
Feb  2 06:31:01 np0005604943 zen_haibt[75846]:    "progress_events": {}
Feb  2 06:31:01 np0005604943 zen_haibt[75846]: }
Feb  2 06:31:01 np0005604943 systemd[1]: libpod-a150f29085866a2bd1ac45e8c34a3e7d8554707393a1fae694d5e83b6f58a5c4.scope: Deactivated successfully.
Feb  2 06:31:01 np0005604943 podman[75830]: 2026-02-02 11:31:01.760732375 +0000 UTC m=+0.757700131 container died a150f29085866a2bd1ac45e8c34a3e7d8554707393a1fae694d5e83b6f58a5c4 (image=quay.io/ceph/ceph:v20, name=zen_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 06:31:01 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b30ac141e88534cd76642cc88c27e6811b7b8dd4198fc9eca3c4d370089a55b4-merged.mount: Deactivated successfully.
Feb  2 06:31:01 np0005604943 podman[75830]: 2026-02-02 11:31:01.802734656 +0000 UTC m=+0.799702412 container remove a150f29085866a2bd1ac45e8c34a3e7d8554707393a1fae694d5e83b6f58a5c4 (image=quay.io/ceph/ceph:v20, name=zen_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:01 np0005604943 systemd[1]: libpod-conmon-a150f29085866a2bd1ac45e8c34a3e7d8554707393a1fae694d5e83b6f58a5c4.scope: Deactivated successfully.
Feb  2 06:31:01 np0005604943 podman[75886]: 2026-02-02 11:31:01.859640542 +0000 UTC m=+0.041204700 container create 32bafc578df88eb4c129836b3775e60a349d1081fcd776df1c1a6df7c0430ece (image=quay.io/ceph/ceph:v20, name=tender_jennings, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 06:31:01 np0005604943 systemd[1]: Started libpod-conmon-32bafc578df88eb4c129836b3775e60a349d1081fcd776df1c1a6df7c0430ece.scope.
Feb  2 06:31:01 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f014056b7855c36db7d237f6845438c42b4aae6a38fcded4f23fc79bc09ca73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f014056b7855c36db7d237f6845438c42b4aae6a38fcded4f23fc79bc09ca73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f014056b7855c36db7d237f6845438c42b4aae6a38fcded4f23fc79bc09ca73/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f014056b7855c36db7d237f6845438c42b4aae6a38fcded4f23fc79bc09ca73/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:01 np0005604943 podman[75886]: 2026-02-02 11:31:01.838366024 +0000 UTC m=+0.019930182 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:01 np0005604943 podman[75886]: 2026-02-02 11:31:01.942364569 +0000 UTC m=+0.123928727 container init 32bafc578df88eb4c129836b3775e60a349d1081fcd776df1c1a6df7c0430ece (image=quay.io/ceph/ceph:v20, name=tender_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:01 np0005604943 podman[75886]: 2026-02-02 11:31:01.949066101 +0000 UTC m=+0.130630259 container start 32bafc578df88eb4c129836b3775e60a349d1081fcd776df1c1a6df7c0430ece (image=quay.io/ceph/ceph:v20, name=tender_jennings, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:31:01 np0005604943 podman[75886]: 2026-02-02 11:31:01.953282275 +0000 UTC m=+0.134846503 container attach 32bafc578df88eb4c129836b3775e60a349d1081fcd776df1c1a6df7c0430ece (image=quay.io/ceph/ceph:v20, name=tender_jennings, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 06:31:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb  2 06:31:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/466112257' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  2 06:31:02 np0005604943 tender_jennings[75903]: 
Feb  2 06:31:02 np0005604943 tender_jennings[75903]: [global]
Feb  2 06:31:02 np0005604943 tender_jennings[75903]: 	fsid = 4548a36b-7cdc-5e3e-a814-4e1571be1fae
Feb  2 06:31:02 np0005604943 tender_jennings[75903]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb  2 06:31:02 np0005604943 tender_jennings[75903]: 	osd_crush_chooseleaf_type = 0
Feb  2 06:31:02 np0005604943 systemd[1]: libpod-32bafc578df88eb4c129836b3775e60a349d1081fcd776df1c1a6df7c0430ece.scope: Deactivated successfully.
Feb  2 06:31:02 np0005604943 podman[75886]: 2026-02-02 11:31:02.368616667 +0000 UTC m=+0.550180795 container died 32bafc578df88eb4c129836b3775e60a349d1081fcd776df1c1a6df7c0430ece (image=quay.io/ceph/ceph:v20, name=tender_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 06:31:02 np0005604943 systemd[1]: var-lib-containers-storage-overlay-9f014056b7855c36db7d237f6845438c42b4aae6a38fcded4f23fc79bc09ca73-merged.mount: Deactivated successfully.
Feb  2 06:31:02 np0005604943 podman[75886]: 2026-02-02 11:31:02.403831813 +0000 UTC m=+0.585395971 container remove 32bafc578df88eb4c129836b3775e60a349d1081fcd776df1c1a6df7c0430ece (image=quay.io/ceph/ceph:v20, name=tender_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 06:31:02 np0005604943 systemd[1]: libpod-conmon-32bafc578df88eb4c129836b3775e60a349d1081fcd776df1c1a6df7c0430ece.scope: Deactivated successfully.
Feb  2 06:31:02 np0005604943 podman[75941]: 2026-02-02 11:31:02.466997288 +0000 UTC m=+0.047628694 container create 0579a4755e632d2ad4c5aee52066077df16c6550433dfcc100075048705c25e7 (image=quay.io/ceph/ceph:v20, name=serene_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 06:31:02 np0005604943 systemd[1]: Started libpod-conmon-0579a4755e632d2ad4c5aee52066077df16c6550433dfcc100075048705c25e7.scope.
Feb  2 06:31:02 np0005604943 podman[75941]: 2026-02-02 11:31:02.442071841 +0000 UTC m=+0.022703277 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:02 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:02 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ded323672291eed973d158709b320d5ffeddfc9e41ddb295dcb26afa62359f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:02 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ded323672291eed973d158709b320d5ffeddfc9e41ddb295dcb26afa62359f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:02 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ded323672291eed973d158709b320d5ffeddfc9e41ddb295dcb26afa62359f4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:02 np0005604943 podman[75941]: 2026-02-02 11:31:02.57826356 +0000 UTC m=+0.158894966 container init 0579a4755e632d2ad4c5aee52066077df16c6550433dfcc100075048705c25e7 (image=quay.io/ceph/ceph:v20, name=serene_vaughan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:02 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/466112257' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  2 06:31:02 np0005604943 podman[75941]: 2026-02-02 11:31:02.586942317 +0000 UTC m=+0.167573683 container start 0579a4755e632d2ad4c5aee52066077df16c6550433dfcc100075048705c25e7 (image=quay.io/ceph/ceph:v20, name=serene_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:02 np0005604943 podman[75941]: 2026-02-02 11:31:02.590698498 +0000 UTC m=+0.171329914 container attach 0579a4755e632d2ad4c5aee52066077df16c6550433dfcc100075048705c25e7 (image=quay.io/ceph/ceph:v20, name=serene_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:31:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Feb  2 06:31:03 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2396220357' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:03 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2396220357' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb  2 06:31:03 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2396220357' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn  1: '-n'
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn  2: 'mgr.compute-0.twcemg'
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn  3: '-f'
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn  4: '--setuser'
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn  5: 'ceph'
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn  6: '--setgroup'
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn  7: 'ceph'
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn  8: '--default-log-to-file=false'
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn  9: '--default-log-to-journald=true'
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn  10: '--default-log-to-stderr=false'
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr respawn  exe_path /proc/self/exe
Feb  2 06:31:03 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.twcemg(active, since 4s)
Feb  2 06:31:03 np0005604943 systemd[1]: libpod-0579a4755e632d2ad4c5aee52066077df16c6550433dfcc100075048705c25e7.scope: Deactivated successfully.
Feb  2 06:31:03 np0005604943 podman[75941]: 2026-02-02 11:31:03.629634506 +0000 UTC m=+1.210265912 container died 0579a4755e632d2ad4c5aee52066077df16c6550433dfcc100075048705c25e7 (image=quay.io/ceph/ceph:v20, name=serene_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:03 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1ded323672291eed973d158709b320d5ffeddfc9e41ddb295dcb26afa62359f4-merged.mount: Deactivated successfully.
Feb  2 06:31:03 np0005604943 podman[75941]: 2026-02-02 11:31:03.668518183 +0000 UTC m=+1.249149579 container remove 0579a4755e632d2ad4c5aee52066077df16c6550433dfcc100075048705c25e7 (image=quay.io/ceph/ceph:v20, name=serene_vaughan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 06:31:03 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg[75554]: ignoring --setuser ceph since I am not root
Feb  2 06:31:03 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg[75554]: ignoring --setgroup ceph since I am not root
Feb  2 06:31:03 np0005604943 systemd[1]: libpod-conmon-0579a4755e632d2ad4c5aee52066077df16c6550433dfcc100075048705c25e7.scope: Deactivated successfully.
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: pidfile_write: ignore empty --pid-file
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'alerts'
Feb  2 06:31:03 np0005604943 podman[75995]: 2026-02-02 11:31:03.718253294 +0000 UTC m=+0.033565673 container create 61cd2b190f901377e8a962eacd9b312f531ce0ad6559ad8b8bd23e71f7c45f6a (image=quay.io/ceph/ceph:v20, name=pedantic_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:03 np0005604943 systemd[1]: Started libpod-conmon-61cd2b190f901377e8a962eacd9b312f531ce0ad6559ad8b8bd23e71f7c45f6a.scope.
Feb  2 06:31:03 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb46d8cdbe9ea0631d4d4a378f4246a8acb3eb8f6ba3282fbb38eec9a0a16364/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb46d8cdbe9ea0631d4d4a378f4246a8acb3eb8f6ba3282fbb38eec9a0a16364/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb46d8cdbe9ea0631d4d4a378f4246a8acb3eb8f6ba3282fbb38eec9a0a16364/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:03 np0005604943 podman[75995]: 2026-02-02 11:31:03.777687418 +0000 UTC m=+0.092999847 container init 61cd2b190f901377e8a962eacd9b312f531ce0ad6559ad8b8bd23e71f7c45f6a (image=quay.io/ceph/ceph:v20, name=pedantic_montalcini, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 06:31:03 np0005604943 podman[75995]: 2026-02-02 11:31:03.781371278 +0000 UTC m=+0.096683687 container start 61cd2b190f901377e8a962eacd9b312f531ce0ad6559ad8b8bd23e71f7c45f6a (image=quay.io/ceph/ceph:v20, name=pedantic_montalcini, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:03 np0005604943 podman[75995]: 2026-02-02 11:31:03.785517911 +0000 UTC m=+0.100830320 container attach 61cd2b190f901377e8a962eacd9b312f531ce0ad6559ad8b8bd23e71f7c45f6a (image=quay.io/ceph/ceph:v20, name=pedantic_montalcini, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:03 np0005604943 podman[75995]: 2026-02-02 11:31:03.701377666 +0000 UTC m=+0.016690065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'balancer'
Feb  2 06:31:03 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'cephadm'
Feb  2 06:31:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb  2 06:31:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666982848' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb  2 06:31:04 np0005604943 pedantic_montalcini[76029]: {
Feb  2 06:31:04 np0005604943 pedantic_montalcini[76029]:    "epoch": 5,
Feb  2 06:31:04 np0005604943 pedantic_montalcini[76029]:    "available": true,
Feb  2 06:31:04 np0005604943 pedantic_montalcini[76029]:    "active_name": "compute-0.twcemg",
Feb  2 06:31:04 np0005604943 pedantic_montalcini[76029]:    "num_standby": 0
Feb  2 06:31:04 np0005604943 pedantic_montalcini[76029]: }
Feb  2 06:31:04 np0005604943 systemd[1]: libpod-61cd2b190f901377e8a962eacd9b312f531ce0ad6559ad8b8bd23e71f7c45f6a.scope: Deactivated successfully.
Feb  2 06:31:04 np0005604943 conmon[76029]: conmon 61cd2b190f901377e8a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61cd2b190f901377e8a962eacd9b312f531ce0ad6559ad8b8bd23e71f7c45f6a.scope/container/memory.events
Feb  2 06:31:04 np0005604943 podman[75995]: 2026-02-02 11:31:04.272767145 +0000 UTC m=+0.588079554 container died 61cd2b190f901377e8a962eacd9b312f531ce0ad6559ad8b8bd23e71f7c45f6a (image=quay.io/ceph/ceph:v20, name=pedantic_montalcini, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:04 np0005604943 systemd[1]: var-lib-containers-storage-overlay-cb46d8cdbe9ea0631d4d4a378f4246a8acb3eb8f6ba3282fbb38eec9a0a16364-merged.mount: Deactivated successfully.
Feb  2 06:31:04 np0005604943 podman[75995]: 2026-02-02 11:31:04.31236824 +0000 UTC m=+0.627680639 container remove 61cd2b190f901377e8a962eacd9b312f531ce0ad6559ad8b8bd23e71f7c45f6a (image=quay.io/ceph/ceph:v20, name=pedantic_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:04 np0005604943 systemd[1]: libpod-conmon-61cd2b190f901377e8a962eacd9b312f531ce0ad6559ad8b8bd23e71f7c45f6a.scope: Deactivated successfully.
Feb  2 06:31:04 np0005604943 podman[76076]: 2026-02-02 11:31:04.390682938 +0000 UTC m=+0.054621235 container create 24a86d81d9baa8d741db9915afb38ba9014b601c4f2f5f6af47762d33c5bb41f (image=quay.io/ceph/ceph:v20, name=friendly_franklin, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:04 np0005604943 systemd[1]: Started libpod-conmon-24a86d81d9baa8d741db9915afb38ba9014b601c4f2f5f6af47762d33c5bb41f.scope.
Feb  2 06:31:04 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:04 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855cd32c6b13b2329e49de1a9eb557166b6f0cfe007183bd54a5da80dab5800c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:04 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855cd32c6b13b2329e49de1a9eb557166b6f0cfe007183bd54a5da80dab5800c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:04 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855cd32c6b13b2329e49de1a9eb557166b6f0cfe007183bd54a5da80dab5800c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:04 np0005604943 podman[76076]: 2026-02-02 11:31:04.370400196 +0000 UTC m=+0.034338503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:04 np0005604943 podman[76076]: 2026-02-02 11:31:04.48907348 +0000 UTC m=+0.153011827 container init 24a86d81d9baa8d741db9915afb38ba9014b601c4f2f5f6af47762d33c5bb41f (image=quay.io/ceph/ceph:v20, name=friendly_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 06:31:04 np0005604943 podman[76076]: 2026-02-02 11:31:04.495257848 +0000 UTC m=+0.159196155 container start 24a86d81d9baa8d741db9915afb38ba9014b601c4f2f5f6af47762d33c5bb41f (image=quay.io/ceph/ceph:v20, name=friendly_franklin, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:04 np0005604943 podman[76076]: 2026-02-02 11:31:04.499124182 +0000 UTC m=+0.163062539 container attach 24a86d81d9baa8d741db9915afb38ba9014b601c4f2f5f6af47762d33c5bb41f (image=quay.io/ceph/ceph:v20, name=friendly_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 06:31:04 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'crash'
Feb  2 06:31:04 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2396220357' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb  2 06:31:04 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'dashboard'
Feb  2 06:31:05 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'devicehealth'
Feb  2 06:31:05 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'diskprediction_local'
Feb  2 06:31:05 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg[75554]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  2 06:31:05 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg[75554]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  2 06:31:05 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg[75554]:  from numpy import show_config as show_numpy_config
Feb  2 06:31:05 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'influx'
Feb  2 06:31:05 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'insights'
Feb  2 06:31:05 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'iostat'
Feb  2 06:31:05 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'k8sevents'
Feb  2 06:31:06 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'localpool'
Feb  2 06:31:06 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'mds_autoscaler'
Feb  2 06:31:06 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'mirroring'
Feb  2 06:31:06 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'nfs'
Feb  2 06:31:06 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'orchestrator'
Feb  2 06:31:06 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'osd_perf_query'
Feb  2 06:31:06 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'osd_support'
Feb  2 06:31:06 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'pg_autoscaler'
Feb  2 06:31:07 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'progress'
Feb  2 06:31:07 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'prometheus'
Feb  2 06:31:07 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'rbd_support'
Feb  2 06:31:07 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'rgw'
Feb  2 06:31:07 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'rook'
Feb  2 06:31:08 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'selftest'
Feb  2 06:31:08 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'smb'
Feb  2 06:31:08 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'snap_schedule'
Feb  2 06:31:08 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'stats'
Feb  2 06:31:08 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'status'
Feb  2 06:31:08 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'telegraf'
Feb  2 06:31:08 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'telemetry'
Feb  2 06:31:09 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'test_orchestrator'
Feb  2 06:31:09 np0005604943 ceph-mgr[75558]: mgr[py] Loading python module 'volumes'
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : Active manager daemon compute-0.twcemg restarted
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.twcemg
Feb  2 06:31:09 np0005604943 ceph-mgr[75558]: ms_deliver_dispatch: unhandled message 0x55c5b1e02000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Feb  2 06:31:09 np0005604943 ceph-mgr[75558]: mgr handle_mgr_map Activating!
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.twcemg(active, starting, since 0.0135331s)
Feb  2 06:31:09 np0005604943 ceph-mgr[75558]: mgr handle_mgr_map I am now activating
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.twcemg", "id": "compute-0.twcemg"} v 0)
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "mgr metadata", "who": "compute-0.twcemg", "id": "compute-0.twcemg"} : dispatch
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "mds metadata"} : dispatch
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).mds e1 all = 1
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata"} : dispatch
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "mon metadata"} : dispatch
Feb  2 06:31:09 np0005604943 ceph-mgr[75558]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:31:09 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: balancer
Feb  2 06:31:09 np0005604943 ceph-mgr[75558]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : Manager daemon compute-0.twcemg is now available
Feb  2 06:31:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Starting
Feb  2 06:31:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:31:09
Feb  2 06:31:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:31:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:31:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] No pools available
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: Active manager daemon compute-0.twcemg restarted
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: Activating manager daemon compute-0.twcemg
Feb  2 06:31:09 np0005604943 ceph-mon[75271]: Manager daemon compute-0.twcemg is now available
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.twcemg(active, since 1.02732s)
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Feb  2 06:31:10 np0005604943 friendly_franklin[76092]: {
Feb  2 06:31:10 np0005604943 friendly_franklin[76092]:    "mgrmap_epoch": 7,
Feb  2 06:31:10 np0005604943 friendly_franklin[76092]:    "initialized": true
Feb  2 06:31:10 np0005604943 friendly_franklin[76092]: }
Feb  2 06:31:10 np0005604943 systemd[1]: libpod-24a86d81d9baa8d741db9915afb38ba9014b601c4f2f5f6af47762d33c5bb41f.scope: Deactivated successfully.
Feb  2 06:31:10 np0005604943 podman[76076]: 2026-02-02 11:31:10.585963387 +0000 UTC m=+6.249901684 container died 24a86d81d9baa8d741db9915afb38ba9014b601c4f2f5f6af47762d33c5bb41f (image=quay.io/ceph/ceph:v20, name=friendly_franklin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:10 np0005604943 systemd[1]: var-lib-containers-storage-overlay-855cd32c6b13b2329e49de1a9eb557166b6f0cfe007183bd54a5da80dab5800c-merged.mount: Deactivated successfully.
Feb  2 06:31:10 np0005604943 podman[76076]: 2026-02-02 11:31:10.628895253 +0000 UTC m=+6.292833550 container remove 24a86d81d9baa8d741db9915afb38ba9014b601c4f2f5f6af47762d33c5bb41f (image=quay.io/ceph/ceph:v20, name=friendly_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:10 np0005604943 systemd[1]: libpod-conmon-24a86d81d9baa8d741db9915afb38ba9014b601c4f2f5f6af47762d33c5bb41f.scope: Deactivated successfully.
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Feb  2 06:31:10 np0005604943 podman[76161]: 2026-02-02 11:31:10.693336353 +0000 UTC m=+0.047427679 container create 1faab1b35f9940316d864e5047dae610f5bef62fd4c7102d9dc7ae8e354e5cf1 (image=quay.io/ceph/ceph:v20, name=eloquent_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: cephadm
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: crash
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: devicehealth
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [devicehealth INFO root] Starting
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: iostat
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: nfs
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: orchestrator
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: pg_autoscaler
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: progress
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [progress INFO root] Loading...
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [progress INFO root] No stored events to load
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [progress INFO root] Loaded [] historic events
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [progress INFO root] Loaded OSDMap, ready.
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] recovery thread starting
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] starting setup
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: rbd_support
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: status
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: telemetry
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.twcemg/mirror_snapshot_schedule"} v 0)
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.twcemg/mirror_snapshot_schedule"} : dispatch
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:31:10 np0005604943 systemd[1]: Started libpod-conmon-1faab1b35f9940316d864e5047dae610f5bef62fd4c7102d9dc7ae8e354e5cf1.scope.
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] PerfHandler: starting
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TaskHandler: starting
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.twcemg/trash_purge_schedule"} v 0)
Feb  2 06:31:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.twcemg/trash_purge_schedule"} : dispatch
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] setup complete
Feb  2 06:31:10 np0005604943 ceph-mgr[75558]: mgr load Constructed class from module: volumes
Feb  2 06:31:10 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3df939868c76ad090261725226324c2e1543f0295eaea1b050504d73c9b9ed0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3df939868c76ad090261725226324c2e1543f0295eaea1b050504d73c9b9ed0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3df939868c76ad090261725226324c2e1543f0295eaea1b050504d73c9b9ed0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:10 np0005604943 podman[76161]: 2026-02-02 11:31:10.765811092 +0000 UTC m=+0.119902398 container init 1faab1b35f9940316d864e5047dae610f5bef62fd4c7102d9dc7ae8e354e5cf1 (image=quay.io/ceph/ceph:v20, name=eloquent_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:31:10 np0005604943 podman[76161]: 2026-02-02 11:31:10.672188409 +0000 UTC m=+0.026279755 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:10 np0005604943 podman[76161]: 2026-02-02 11:31:10.770602051 +0000 UTC m=+0.124693387 container start 1faab1b35f9940316d864e5047dae610f5bef62fd4c7102d9dc7ae8e354e5cf1 (image=quay.io/ceph/ceph:v20, name=eloquent_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:31:10 np0005604943 podman[76161]: 2026-02-02 11:31:10.773878721 +0000 UTC m=+0.127970017 container attach 1faab1b35f9940316d864e5047dae610f5bef62fd4c7102d9dc7ae8e354e5cf1 (image=quay.io/ceph/ceph:v20, name=eloquent_lederberg, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3284038411' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb  2 06:31:11 np0005604943 ceph-mgr[75558]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: Found migration_current of "None". Setting to last migration.
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.twcemg/mirror_snapshot_schedule"} : dispatch
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.twcemg/trash_purge_schedule"} : dispatch
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3284038411' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb  2 06:31:11 np0005604943 ceph-mgr[75558]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:31:11] ENGINE Bus STARTING
Feb  2 06:31:11 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:31:11] ENGINE Bus STARTING
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3284038411' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Feb  2 06:31:11 np0005604943 eloquent_lederberg[76229]: module 'orchestrator' is already enabled (always-on)
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.twcemg(active, since 2s)
Feb  2 06:31:11 np0005604943 systemd[1]: libpod-1faab1b35f9940316d864e5047dae610f5bef62fd4c7102d9dc7ae8e354e5cf1.scope: Deactivated successfully.
Feb  2 06:31:11 np0005604943 podman[76161]: 2026-02-02 11:31:11.731441809 +0000 UTC m=+1.085533145 container died 1faab1b35f9940316d864e5047dae610f5bef62fd4c7102d9dc7ae8e354e5cf1 (image=quay.io/ceph/ceph:v20, name=eloquent_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:11 np0005604943 ceph-mgr[75558]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:31:11] ENGINE Serving on http://192.168.122.100:8765
Feb  2 06:31:11 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:31:11] ENGINE Serving on http://192.168.122.100:8765
Feb  2 06:31:11 np0005604943 systemd[1]: var-lib-containers-storage-overlay-e3df939868c76ad090261725226324c2e1543f0295eaea1b050504d73c9b9ed0-merged.mount: Deactivated successfully.
Feb  2 06:31:11 np0005604943 podman[76161]: 2026-02-02 11:31:11.778261241 +0000 UTC m=+1.132352587 container remove 1faab1b35f9940316d864e5047dae610f5bef62fd4c7102d9dc7ae8e354e5cf1 (image=quay.io/ceph/ceph:v20, name=eloquent_lederberg, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Feb  2 06:31:11 np0005604943 systemd[1]: libpod-conmon-1faab1b35f9940316d864e5047dae610f5bef62fd4c7102d9dc7ae8e354e5cf1.scope: Deactivated successfully.
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019906051 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:31:11 np0005604943 podman[76306]: 2026-02-02 11:31:11.846136385 +0000 UTC m=+0.049589069 container create 3588e0c80d4e7605c6faa60bf239ebd2a6d1e11de2c6dec292548b8922219f25 (image=quay.io/ceph/ceph:v20, name=nice_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:11 np0005604943 systemd[1]: Started libpod-conmon-3588e0c80d4e7605c6faa60bf239ebd2a6d1e11de2c6dec292548b8922219f25.scope.
Feb  2 06:31:11 np0005604943 ceph-mgr[75558]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:31:11] ENGINE Serving on https://192.168.122.100:7150
Feb  2 06:31:11 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:31:11] ENGINE Serving on https://192.168.122.100:7150
Feb  2 06:31:11 np0005604943 ceph-mgr[75558]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:31:11] ENGINE Bus STARTED
Feb  2 06:31:11 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:31:11] ENGINE Bus STARTED
Feb  2 06:31:11 np0005604943 ceph-mgr[75558]: [cephadm INFO cherrypy.error] [02/Feb/2026:11:31:11] ENGINE Client ('192.168.122.100', 46156) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 06:31:11 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : [02/Feb/2026:11:31:11] ENGINE Client ('192.168.122.100', 46156) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 06:31:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  2 06:31:11 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a619c57682fc64d80b5c30dc8c1d9498f7f9327ada5cdfcd751cef37476c410a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a619c57682fc64d80b5c30dc8c1d9498f7f9327ada5cdfcd751cef37476c410a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a619c57682fc64d80b5c30dc8c1d9498f7f9327ada5cdfcd751cef37476c410a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:11 np0005604943 podman[76306]: 2026-02-02 11:31:11.821878905 +0000 UTC m=+0.025331639 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:11 np0005604943 podman[76306]: 2026-02-02 11:31:11.943525759 +0000 UTC m=+0.146978413 container init 3588e0c80d4e7605c6faa60bf239ebd2a6d1e11de2c6dec292548b8922219f25 (image=quay.io/ceph/ceph:v20, name=nice_goldwasser, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 06:31:11 np0005604943 podman[76306]: 2026-02-02 11:31:11.949903303 +0000 UTC m=+0.153355947 container start 3588e0c80d4e7605c6faa60bf239ebd2a6d1e11de2c6dec292548b8922219f25 (image=quay.io/ceph/ceph:v20, name=nice_goldwasser, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 06:31:11 np0005604943 podman[76306]: 2026-02-02 11:31:11.95239995 +0000 UTC m=+0.155852594 container attach 3588e0c80d4e7605c6faa60bf239ebd2a6d1e11de2c6dec292548b8922219f25 (image=quay.io/ceph/ceph:v20, name=nice_goldwasser, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:12 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:31:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Feb  2 06:31:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 06:31:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  2 06:31:12 np0005604943 systemd[1]: libpod-3588e0c80d4e7605c6faa60bf239ebd2a6d1e11de2c6dec292548b8922219f25.scope: Deactivated successfully.
Feb  2 06:31:12 np0005604943 conmon[76334]: conmon 3588e0c80d4e7605c6fa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3588e0c80d4e7605c6faa60bf239ebd2a6d1e11de2c6dec292548b8922219f25.scope/container/memory.events
Feb  2 06:31:12 np0005604943 podman[76306]: 2026-02-02 11:31:12.437129697 +0000 UTC m=+0.640582371 container died 3588e0c80d4e7605c6faa60bf239ebd2a6d1e11de2c6dec292548b8922219f25 (image=quay.io/ceph/ceph:v20, name=nice_goldwasser, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 06:31:12 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a619c57682fc64d80b5c30dc8c1d9498f7f9327ada5cdfcd751cef37476c410a-merged.mount: Deactivated successfully.
Feb  2 06:31:12 np0005604943 ceph-mon[75271]: [02/Feb/2026:11:31:11] ENGINE Bus STARTING
Feb  2 06:31:12 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3284038411' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Feb  2 06:31:12 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:12 np0005604943 podman[76306]: 2026-02-02 11:31:12.605947331 +0000 UTC m=+0.809400015 container remove 3588e0c80d4e7605c6faa60bf239ebd2a6d1e11de2c6dec292548b8922219f25 (image=quay.io/ceph/ceph:v20, name=nice_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 06:31:12 np0005604943 systemd[1]: libpod-conmon-3588e0c80d4e7605c6faa60bf239ebd2a6d1e11de2c6dec292548b8922219f25.scope: Deactivated successfully.
Feb  2 06:31:12 np0005604943 podman[76372]: 2026-02-02 11:31:12.687148527 +0000 UTC m=+0.059936399 container create a4eb6b1b60395a135d5d12c6c724bd75c8a36a26faf4a1deb32b1cf498621856 (image=quay.io/ceph/ceph:v20, name=relaxed_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 06:31:12 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:12 np0005604943 systemd[1]: Started libpod-conmon-a4eb6b1b60395a135d5d12c6c724bd75c8a36a26faf4a1deb32b1cf498621856.scope.
Feb  2 06:31:12 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2222b588b784a8b423515871ebd41ca31e04e8b1c8506140801d25130dd1f862/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2222b588b784a8b423515871ebd41ca31e04e8b1c8506140801d25130dd1f862/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2222b588b784a8b423515871ebd41ca31e04e8b1c8506140801d25130dd1f862/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:12 np0005604943 podman[76372]: 2026-02-02 11:31:12.663946317 +0000 UTC m=+0.036734269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:12 np0005604943 podman[76372]: 2026-02-02 11:31:12.764163829 +0000 UTC m=+0.136951731 container init a4eb6b1b60395a135d5d12c6c724bd75c8a36a26faf4a1deb32b1cf498621856 (image=quay.io/ceph/ceph:v20, name=relaxed_zhukovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:12 np0005604943 podman[76372]: 2026-02-02 11:31:12.7686385 +0000 UTC m=+0.141426362 container start a4eb6b1b60395a135d5d12c6c724bd75c8a36a26faf4a1deb32b1cf498621856 (image=quay.io/ceph/ceph:v20, name=relaxed_zhukovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:31:12 np0005604943 podman[76372]: 2026-02-02 11:31:12.771790626 +0000 UTC m=+0.144578498 container attach a4eb6b1b60395a135d5d12c6c724bd75c8a36a26faf4a1deb32b1cf498621856 (image=quay.io/ceph/ceph:v20, name=relaxed_zhukovsky, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:13 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:31:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Feb  2 06:31:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:13 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Set ssh ssh_user
Feb  2 06:31:13 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Feb  2 06:31:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Feb  2 06:31:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:13 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Set ssh ssh_config
Feb  2 06:31:13 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Feb  2 06:31:13 np0005604943 ceph-mgr[75558]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Feb  2 06:31:13 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Feb  2 06:31:13 np0005604943 relaxed_zhukovsky[76389]: ssh user set to ceph-admin. sudo will be used
Feb  2 06:31:13 np0005604943 systemd[1]: libpod-a4eb6b1b60395a135d5d12c6c724bd75c8a36a26faf4a1deb32b1cf498621856.scope: Deactivated successfully.
Feb  2 06:31:13 np0005604943 podman[76372]: 2026-02-02 11:31:13.201495758 +0000 UTC m=+0.574283610 container died a4eb6b1b60395a135d5d12c6c724bd75c8a36a26faf4a1deb32b1cf498621856 (image=quay.io/ceph/ceph:v20, name=relaxed_zhukovsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 06:31:13 np0005604943 systemd[1]: var-lib-containers-storage-overlay-2222b588b784a8b423515871ebd41ca31e04e8b1c8506140801d25130dd1f862-merged.mount: Deactivated successfully.
Feb  2 06:31:13 np0005604943 podman[76372]: 2026-02-02 11:31:13.231108692 +0000 UTC m=+0.603896544 container remove a4eb6b1b60395a135d5d12c6c724bd75c8a36a26faf4a1deb32b1cf498621856 (image=quay.io/ceph/ceph:v20, name=relaxed_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:13 np0005604943 systemd[1]: libpod-conmon-a4eb6b1b60395a135d5d12c6c724bd75c8a36a26faf4a1deb32b1cf498621856.scope: Deactivated successfully.
Feb  2 06:31:13 np0005604943 podman[76426]: 2026-02-02 11:31:13.298297007 +0000 UTC m=+0.046365941 container create bb6dbf740f4cdb091cb3b402ad15a731b197f34f8e74390e27c1f8c6847fa22f (image=quay.io/ceph/ceph:v20, name=intelligent_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 06:31:13 np0005604943 systemd[1]: Started libpod-conmon-bb6dbf740f4cdb091cb3b402ad15a731b197f34f8e74390e27c1f8c6847fa22f.scope.
Feb  2 06:31:13 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411b8ca31f74bba49162eefc24b43c7bc357f294b1bcac87a8d59cab7bd730ce/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411b8ca31f74bba49162eefc24b43c7bc357f294b1bcac87a8d59cab7bd730ce/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411b8ca31f74bba49162eefc24b43c7bc357f294b1bcac87a8d59cab7bd730ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411b8ca31f74bba49162eefc24b43c7bc357f294b1bcac87a8d59cab7bd730ce/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411b8ca31f74bba49162eefc24b43c7bc357f294b1bcac87a8d59cab7bd730ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:13 np0005604943 podman[76426]: 2026-02-02 11:31:13.282925079 +0000 UTC m=+0.030993983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:13 np0005604943 podman[76426]: 2026-02-02 11:31:13.382560105 +0000 UTC m=+0.130629079 container init bb6dbf740f4cdb091cb3b402ad15a731b197f34f8e74390e27c1f8c6847fa22f (image=quay.io/ceph/ceph:v20, name=intelligent_feistel, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:13 np0005604943 podman[76426]: 2026-02-02 11:31:13.389897994 +0000 UTC m=+0.137966918 container start bb6dbf740f4cdb091cb3b402ad15a731b197f34f8e74390e27c1f8c6847fa22f (image=quay.io/ceph/ceph:v20, name=intelligent_feistel, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:13 np0005604943 podman[76426]: 2026-02-02 11:31:13.393520503 +0000 UTC m=+0.141589427 container attach bb6dbf740f4cdb091cb3b402ad15a731b197f34f8e74390e27c1f8c6847fa22f (image=quay.io/ceph/ceph:v20, name=intelligent_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:13 np0005604943 ceph-mgr[75558]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 06:31:13 np0005604943 ceph-mon[75271]: [02/Feb/2026:11:31:11] ENGINE Serving on http://192.168.122.100:8765
Feb  2 06:31:13 np0005604943 ceph-mon[75271]: [02/Feb/2026:11:31:11] ENGINE Serving on https://192.168.122.100:7150
Feb  2 06:31:13 np0005604943 ceph-mon[75271]: [02/Feb/2026:11:31:11] ENGINE Bus STARTED
Feb  2 06:31:13 np0005604943 ceph-mon[75271]: [02/Feb/2026:11:31:11] ENGINE Client ('192.168.122.100', 46156) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 06:31:13 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:13 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:13 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:31:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Feb  2 06:31:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:13 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Set ssh ssh_identity_key
Feb  2 06:31:13 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Feb  2 06:31:13 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Set ssh private key
Feb  2 06:31:13 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Set ssh private key
Feb  2 06:31:13 np0005604943 systemd[1]: libpod-bb6dbf740f4cdb091cb3b402ad15a731b197f34f8e74390e27c1f8c6847fa22f.scope: Deactivated successfully.
Feb  2 06:31:13 np0005604943 podman[76426]: 2026-02-02 11:31:13.860692551 +0000 UTC m=+0.608761455 container died bb6dbf740f4cdb091cb3b402ad15a731b197f34f8e74390e27c1f8c6847fa22f (image=quay.io/ceph/ceph:v20, name=intelligent_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 06:31:13 np0005604943 systemd[1]: var-lib-containers-storage-overlay-411b8ca31f74bba49162eefc24b43c7bc357f294b1bcac87a8d59cab7bd730ce-merged.mount: Deactivated successfully.
Feb  2 06:31:13 np0005604943 podman[76426]: 2026-02-02 11:31:13.889967446 +0000 UTC m=+0.638036340 container remove bb6dbf740f4cdb091cb3b402ad15a731b197f34f8e74390e27c1f8c6847fa22f (image=quay.io/ceph/ceph:v20, name=intelligent_feistel, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 06:31:13 np0005604943 systemd[1]: libpod-conmon-bb6dbf740f4cdb091cb3b402ad15a731b197f34f8e74390e27c1f8c6847fa22f.scope: Deactivated successfully.
Feb  2 06:31:13 np0005604943 podman[76480]: 2026-02-02 11:31:13.941823225 +0000 UTC m=+0.040392807 container create b932932132fd4dc7cfcdaf7ef32f3d99d627883239a2877dd66b46a6dfca5658 (image=quay.io/ceph/ceph:v20, name=serene_lehmann, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:13 np0005604943 systemd[1]: Started libpod-conmon-b932932132fd4dc7cfcdaf7ef32f3d99d627883239a2877dd66b46a6dfca5658.scope.
Feb  2 06:31:13 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ffa8af34b71fa046e083daa2d9b7d24cd31ecc50f0a24ce9274323887eadec/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ffa8af34b71fa046e083daa2d9b7d24cd31ecc50f0a24ce9274323887eadec/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ffa8af34b71fa046e083daa2d9b7d24cd31ecc50f0a24ce9274323887eadec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ffa8af34b71fa046e083daa2d9b7d24cd31ecc50f0a24ce9274323887eadec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ffa8af34b71fa046e083daa2d9b7d24cd31ecc50f0a24ce9274323887eadec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:14 np0005604943 podman[76480]: 2026-02-02 11:31:14.004939589 +0000 UTC m=+0.103509181 container init b932932132fd4dc7cfcdaf7ef32f3d99d627883239a2877dd66b46a6dfca5658 (image=quay.io/ceph/ceph:v20, name=serene_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 06:31:14 np0005604943 podman[76480]: 2026-02-02 11:31:13.918879722 +0000 UTC m=+0.017449364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:14 np0005604943 podman[76480]: 2026-02-02 11:31:14.017780869 +0000 UTC m=+0.116350461 container start b932932132fd4dc7cfcdaf7ef32f3d99d627883239a2877dd66b46a6dfca5658 (image=quay.io/ceph/ceph:v20, name=serene_lehmann, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 06:31:14 np0005604943 podman[76480]: 2026-02-02 11:31:14.021335385 +0000 UTC m=+0.119904987 container attach b932932132fd4dc7cfcdaf7ef32f3d99d627883239a2877dd66b46a6dfca5658 (image=quay.io/ceph/ceph:v20, name=serene_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 06:31:14 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:31:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Feb  2 06:31:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:14 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Set ssh ssh_identity_pub
Feb  2 06:31:14 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Feb  2 06:31:14 np0005604943 systemd[1]: libpod-b932932132fd4dc7cfcdaf7ef32f3d99d627883239a2877dd66b46a6dfca5658.scope: Deactivated successfully.
Feb  2 06:31:14 np0005604943 podman[76522]: 2026-02-02 11:31:14.483439156 +0000 UTC m=+0.025193105 container died b932932132fd4dc7cfcdaf7ef32f3d99d627883239a2877dd66b46a6dfca5658 (image=quay.io/ceph/ceph:v20, name=serene_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:14 np0005604943 systemd[1]: var-lib-containers-storage-overlay-87ffa8af34b71fa046e083daa2d9b7d24cd31ecc50f0a24ce9274323887eadec-merged.mount: Deactivated successfully.
Feb  2 06:31:14 np0005604943 podman[76522]: 2026-02-02 11:31:14.514449738 +0000 UTC m=+0.056203647 container remove b932932132fd4dc7cfcdaf7ef32f3d99d627883239a2877dd66b46a6dfca5658 (image=quay.io/ceph/ceph:v20, name=serene_lehmann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 06:31:14 np0005604943 systemd[1]: libpod-conmon-b932932132fd4dc7cfcdaf7ef32f3d99d627883239a2877dd66b46a6dfca5658.scope: Deactivated successfully.
Feb  2 06:31:14 np0005604943 ceph-mon[75271]: Set ssh ssh_user
Feb  2 06:31:14 np0005604943 ceph-mon[75271]: Set ssh ssh_config
Feb  2 06:31:14 np0005604943 ceph-mon[75271]: ssh user set to ceph-admin. sudo will be used
Feb  2 06:31:14 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:14 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:14 np0005604943 podman[76537]: 2026-02-02 11:31:14.577544182 +0000 UTC m=+0.041823637 container create cc8304f3f53963ea3e630461ad60620b182fff1af4a133e63c90b99dbe494e28 (image=quay.io/ceph/ceph:v20, name=strange_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 06:31:14 np0005604943 systemd[1]: Started libpod-conmon-cc8304f3f53963ea3e630461ad60620b182fff1af4a133e63c90b99dbe494e28.scope.
Feb  2 06:31:14 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18df3dc07f553571df6c5fe6a8320bafb2310edcdeb0700c4221794c89956159/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18df3dc07f553571df6c5fe6a8320bafb2310edcdeb0700c4221794c89956159/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18df3dc07f553571df6c5fe6a8320bafb2310edcdeb0700c4221794c89956159/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:14 np0005604943 podman[76537]: 2026-02-02 11:31:14.559020209 +0000 UTC m=+0.023299654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:14 np0005604943 podman[76537]: 2026-02-02 11:31:14.661817021 +0000 UTC m=+0.126096466 container init cc8304f3f53963ea3e630461ad60620b182fff1af4a133e63c90b99dbe494e28 (image=quay.io/ceph/ceph:v20, name=strange_heyrovsky, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:14 np0005604943 podman[76537]: 2026-02-02 11:31:14.66509563 +0000 UTC m=+0.129375055 container start cc8304f3f53963ea3e630461ad60620b182fff1af4a133e63c90b99dbe494e28 (image=quay.io/ceph/ceph:v20, name=strange_heyrovsky, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 06:31:14 np0005604943 podman[76537]: 2026-02-02 11:31:14.668093581 +0000 UTC m=+0.132373036 container attach cc8304f3f53963ea3e630461ad60620b182fff1af4a133e63c90b99dbe494e28 (image=quay.io/ceph/ceph:v20, name=strange_heyrovsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:14 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:15 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:31:15 np0005604943 strange_heyrovsky[76554]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbXUxKMaTMk7n5bHyy+jOkhPNn3YcZdPBfJxcv7ixpmTTSJTg/KfOYpCspgX4M0CDrqy6bWct+sWRH9SaAoOENkEELin4xRCCkTuOnWAx0COrfC0yKss9jseAYUjBm10inHCmxzoJcgaHrvKSZdfrYQ95jPT555lsr9rtNj17D7Rrn4np/gnwNU2F98X08vZKTqBcDM3zmgoZbywxuboYhFTRafTOeLHBFLY+t1hgAG0BqrJ8/37s/SahnszyfFmDwXCW72VPWEsTZp9k3usfrfaQ+47IMFmir8VdOaLcMv33YaItMklqw5WptMORpW1FlJPgJXAC9ITmbpFZ/4mPmnsSWA17sa6NOf4zxUY8Vpk32fLOhMiXpWtnW2dTHwCWaYB8kkvqjLmhxguCtlEMtA85vX7bqhsvJGW2TKY+YoQMCUzfcV8UQo0d+jJG4rJVUmChTVUOHirM8XfSxZgsNAa7dSKyckkJff+OWocuvZw69Hb4dDyUiGmq85OvN1j8= zuul@controller
Feb  2 06:31:15 np0005604943 systemd[1]: libpod-cc8304f3f53963ea3e630461ad60620b182fff1af4a133e63c90b99dbe494e28.scope: Deactivated successfully.
Feb  2 06:31:15 np0005604943 podman[76537]: 2026-02-02 11:31:15.079026323 +0000 UTC m=+0.543305748 container died cc8304f3f53963ea3e630461ad60620b182fff1af4a133e63c90b99dbe494e28 (image=quay.io/ceph/ceph:v20, name=strange_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:31:15 np0005604943 systemd[1]: var-lib-containers-storage-overlay-18df3dc07f553571df6c5fe6a8320bafb2310edcdeb0700c4221794c89956159-merged.mount: Deactivated successfully.
Feb  2 06:31:15 np0005604943 podman[76537]: 2026-02-02 11:31:15.111815993 +0000 UTC m=+0.576095418 container remove cc8304f3f53963ea3e630461ad60620b182fff1af4a133e63c90b99dbe494e28 (image=quay.io/ceph/ceph:v20, name=strange_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:15 np0005604943 systemd[1]: libpod-conmon-cc8304f3f53963ea3e630461ad60620b182fff1af4a133e63c90b99dbe494e28.scope: Deactivated successfully.
Feb  2 06:31:15 np0005604943 podman[76594]: 2026-02-02 11:31:15.166354545 +0000 UTC m=+0.040797089 container create c9eeadd47a4663c907fb68aa5579b1c1a431c0a8e3557c12965435dd8c8dd682 (image=quay.io/ceph/ceph:v20, name=peaceful_curran, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:15 np0005604943 systemd[1]: Started libpod-conmon-c9eeadd47a4663c907fb68aa5579b1c1a431c0a8e3557c12965435dd8c8dd682.scope.
Feb  2 06:31:15 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ed0409e221dcc298b568b0f63cae36161f42ee45959ccfcb0725170f74d973c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ed0409e221dcc298b568b0f63cae36161f42ee45959ccfcb0725170f74d973c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ed0409e221dcc298b568b0f63cae36161f42ee45959ccfcb0725170f74d973c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:15 np0005604943 podman[76594]: 2026-02-02 11:31:15.144015808 +0000 UTC m=+0.018458412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:15 np0005604943 podman[76594]: 2026-02-02 11:31:15.257757837 +0000 UTC m=+0.132200421 container init c9eeadd47a4663c907fb68aa5579b1c1a431c0a8e3557c12965435dd8c8dd682 (image=quay.io/ceph/ceph:v20, name=peaceful_curran, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:15 np0005604943 podman[76594]: 2026-02-02 11:31:15.265159599 +0000 UTC m=+0.139602153 container start c9eeadd47a4663c907fb68aa5579b1c1a431c0a8e3557c12965435dd8c8dd682 (image=quay.io/ceph/ceph:v20, name=peaceful_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 06:31:15 np0005604943 podman[76594]: 2026-02-02 11:31:15.268753346 +0000 UTC m=+0.143195860 container attach c9eeadd47a4663c907fb68aa5579b1c1a431c0a8e3557c12965435dd8c8dd682 (image=quay.io/ceph/ceph:v20, name=peaceful_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 06:31:15 np0005604943 ceph-mgr[75558]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 06:31:15 np0005604943 ceph-mon[75271]: Set ssh ssh_identity_key
Feb  2 06:31:15 np0005604943 ceph-mon[75271]: Set ssh private key
Feb  2 06:31:15 np0005604943 ceph-mon[75271]: Set ssh ssh_identity_pub
Feb  2 06:31:15 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:31:15 np0005604943 systemd-logind[786]: New session 20 of user ceph-admin.
Feb  2 06:31:15 np0005604943 systemd[1]: Created slice User Slice of UID 42477.
Feb  2 06:31:15 np0005604943 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb  2 06:31:15 np0005604943 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb  2 06:31:15 np0005604943 systemd[1]: Starting User Manager for UID 42477...
Feb  2 06:31:16 np0005604943 systemd[76640]: Queued start job for default target Main User Target.
Feb  2 06:31:16 np0005604943 systemd[76640]: Created slice User Application Slice.
Feb  2 06:31:16 np0005604943 systemd[76640]: Started Mark boot as successful after the user session has run 2 minutes.
Feb  2 06:31:16 np0005604943 systemd[76640]: Started Daily Cleanup of User's Temporary Directories.
Feb  2 06:31:16 np0005604943 systemd[76640]: Reached target Paths.
Feb  2 06:31:16 np0005604943 systemd[76640]: Reached target Timers.
Feb  2 06:31:16 np0005604943 systemd[76640]: Starting D-Bus User Message Bus Socket...
Feb  2 06:31:16 np0005604943 systemd[76640]: Starting Create User's Volatile Files and Directories...
Feb  2 06:31:16 np0005604943 systemd[76640]: Finished Create User's Volatile Files and Directories.
Feb  2 06:31:16 np0005604943 systemd[76640]: Listening on D-Bus User Message Bus Socket.
Feb  2 06:31:16 np0005604943 systemd[76640]: Reached target Sockets.
Feb  2 06:31:16 np0005604943 systemd[76640]: Reached target Basic System.
Feb  2 06:31:16 np0005604943 systemd[76640]: Reached target Main User Target.
Feb  2 06:31:16 np0005604943 systemd[76640]: Startup finished in 108ms.
Feb  2 06:31:16 np0005604943 systemd[1]: Started User Manager for UID 42477.
Feb  2 06:31:16 np0005604943 systemd[1]: Started Session 20 of User ceph-admin.
Feb  2 06:31:16 np0005604943 systemd-logind[786]: New session 22 of user ceph-admin.
Feb  2 06:31:16 np0005604943 systemd[1]: Started Session 22 of User ceph-admin.
Feb  2 06:31:16 np0005604943 systemd-logind[786]: New session 23 of user ceph-admin.
Feb  2 06:31:16 np0005604943 systemd[1]: Started Session 23 of User ceph-admin.
Feb  2 06:31:16 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052656 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:31:16 np0005604943 systemd-logind[786]: New session 24 of user ceph-admin.
Feb  2 06:31:16 np0005604943 systemd[1]: Started Session 24 of User ceph-admin.
Feb  2 06:31:16 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Feb  2 06:31:16 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Feb  2 06:31:17 np0005604943 systemd-logind[786]: New session 25 of user ceph-admin.
Feb  2 06:31:17 np0005604943 systemd[1]: Started Session 25 of User ceph-admin.
Feb  2 06:31:17 np0005604943 systemd-logind[786]: New session 26 of user ceph-admin.
Feb  2 06:31:17 np0005604943 systemd[1]: Started Session 26 of User ceph-admin.
Feb  2 06:31:17 np0005604943 ceph-mgr[75558]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 06:31:17 np0005604943 systemd-logind[786]: New session 27 of user ceph-admin.
Feb  2 06:31:17 np0005604943 systemd[1]: Started Session 27 of User ceph-admin.
Feb  2 06:31:17 np0005604943 ceph-mon[75271]: Deploying cephadm binary to compute-0
Feb  2 06:31:18 np0005604943 systemd-logind[786]: New session 28 of user ceph-admin.
Feb  2 06:31:18 np0005604943 systemd[1]: Started Session 28 of User ceph-admin.
Feb  2 06:31:18 np0005604943 systemd-logind[786]: New session 29 of user ceph-admin.
Feb  2 06:31:18 np0005604943 systemd[1]: Started Session 29 of User ceph-admin.
Feb  2 06:31:18 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:18 np0005604943 systemd-logind[786]: New session 30 of user ceph-admin.
Feb  2 06:31:18 np0005604943 systemd[1]: Started Session 30 of User ceph-admin.
Feb  2 06:31:19 np0005604943 ceph-mgr[75558]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 06:31:20 np0005604943 systemd-logind[786]: New session 31 of user ceph-admin.
Feb  2 06:31:20 np0005604943 systemd[1]: Started Session 31 of User ceph-admin.
Feb  2 06:31:20 np0005604943 systemd-logind[786]: New session 32 of user ceph-admin.
Feb  2 06:31:20 np0005604943 systemd[1]: Started Session 32 of User ceph-admin.
Feb  2 06:31:20 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 06:31:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:21 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Added host compute-0
Feb  2 06:31:21 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Added host compute-0
Feb  2 06:31:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 06:31:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  2 06:31:21 np0005604943 peaceful_curran[76610]: Added host 'compute-0' with addr '192.168.122.100'
Feb  2 06:31:21 np0005604943 systemd[1]: libpod-c9eeadd47a4663c907fb68aa5579b1c1a431c0a8e3557c12965435dd8c8dd682.scope: Deactivated successfully.
Feb  2 06:31:21 np0005604943 podman[76594]: 2026-02-02 11:31:21.080591211 +0000 UTC m=+5.955033755 container died c9eeadd47a4663c907fb68aa5579b1c1a431c0a8e3557c12965435dd8c8dd682 (image=quay.io/ceph/ceph:v20, name=peaceful_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:21 np0005604943 systemd[1]: var-lib-containers-storage-overlay-8ed0409e221dcc298b568b0f63cae36161f42ee45959ccfcb0725170f74d973c-merged.mount: Deactivated successfully.
Feb  2 06:31:21 np0005604943 podman[76594]: 2026-02-02 11:31:21.132492861 +0000 UTC m=+6.006935395 container remove c9eeadd47a4663c907fb68aa5579b1c1a431c0a8e3557c12965435dd8c8dd682 (image=quay.io/ceph/ceph:v20, name=peaceful_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:21 np0005604943 systemd[1]: libpod-conmon-c9eeadd47a4663c907fb68aa5579b1c1a431c0a8e3557c12965435dd8c8dd682.scope: Deactivated successfully.
Feb  2 06:31:21 np0005604943 podman[77037]: 2026-02-02 11:31:21.202819701 +0000 UTC m=+0.050280537 container create 73c81a2bc7c2623679fb172cbabbf1bf65e95d914f50f477e5b22d87743230e2 (image=quay.io/ceph/ceph:v20, name=cool_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:21 np0005604943 systemd[1]: Started libpod-conmon-73c81a2bc7c2623679fb172cbabbf1bf65e95d914f50f477e5b22d87743230e2.scope.
Feb  2 06:31:21 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c331dfa58e9518f46709faadf8ace37c1410edf7bee6e3d14df8efc901eeb883/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c331dfa58e9518f46709faadf8ace37c1410edf7bee6e3d14df8efc901eeb883/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c331dfa58e9518f46709faadf8ace37c1410edf7bee6e3d14df8efc901eeb883/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:21 np0005604943 podman[77037]: 2026-02-02 11:31:21.179037305 +0000 UTC m=+0.026498181 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:21 np0005604943 podman[77037]: 2026-02-02 11:31:21.308643045 +0000 UTC m=+0.156103901 container init 73c81a2bc7c2623679fb172cbabbf1bf65e95d914f50f477e5b22d87743230e2 (image=quay.io/ceph/ceph:v20, name=cool_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb  2 06:31:21 np0005604943 podman[77037]: 2026-02-02 11:31:21.317533687 +0000 UTC m=+0.164994523 container start 73c81a2bc7c2623679fb172cbabbf1bf65e95d914f50f477e5b22d87743230e2 (image=quay.io/ceph/ceph:v20, name=cool_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:21 np0005604943 podman[77037]: 2026-02-02 11:31:21.321451403 +0000 UTC m=+0.168912239 container attach 73c81a2bc7c2623679fb172cbabbf1bf65e95d914f50f477e5b22d87743230e2 (image=quay.io/ceph/ceph:v20, name=cool_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Feb  2 06:31:21 np0005604943 ceph-mgr[75558]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 06:31:21 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:31:21 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Saving service mon spec with placement count:5
Feb  2 06:31:21 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Feb  2 06:31:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 06:31:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:21 np0005604943 cool_feistel[77074]: Scheduled mon update...
Feb  2 06:31:21 np0005604943 systemd[1]: libpod-73c81a2bc7c2623679fb172cbabbf1bf65e95d914f50f477e5b22d87743230e2.scope: Deactivated successfully.
Feb  2 06:31:21 np0005604943 podman[77037]: 2026-02-02 11:31:21.771159577 +0000 UTC m=+0.618620413 container died 73c81a2bc7c2623679fb172cbabbf1bf65e95d914f50f477e5b22d87743230e2 (image=quay.io/ceph/ceph:v20, name=cool_feistel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Feb  2 06:31:21 np0005604943 systemd[1]: var-lib-containers-storage-overlay-c331dfa58e9518f46709faadf8ace37c1410edf7bee6e3d14df8efc901eeb883-merged.mount: Deactivated successfully.
Feb  2 06:31:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054703 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:31:21 np0005604943 podman[77037]: 2026-02-02 11:31:21.810457465 +0000 UTC m=+0.657918291 container remove 73c81a2bc7c2623679fb172cbabbf1bf65e95d914f50f477e5b22d87743230e2 (image=quay.io/ceph/ceph:v20, name=cool_feistel, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 06:31:21 np0005604943 systemd[1]: libpod-conmon-73c81a2bc7c2623679fb172cbabbf1bf65e95d914f50f477e5b22d87743230e2.scope: Deactivated successfully.
Feb  2 06:31:21 np0005604943 podman[77141]: 2026-02-02 11:31:21.877086205 +0000 UTC m=+0.052800226 container create bf8627d729c4b04065cba02b356eea84696632a0a554f0ab58700c6ffd3b35d7 (image=quay.io/ceph/ceph:v20, name=wizardly_brahmagupta, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 06:31:21 np0005604943 systemd[1]: Started libpod-conmon-bf8627d729c4b04065cba02b356eea84696632a0a554f0ab58700c6ffd3b35d7.scope.
Feb  2 06:31:21 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7651e5286d407b117e9402d8936e776316fe7425f04a4b8d91e009902e4fcc2d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7651e5286d407b117e9402d8936e776316fe7425f04a4b8d91e009902e4fcc2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7651e5286d407b117e9402d8936e776316fe7425f04a4b8d91e009902e4fcc2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:21 np0005604943 podman[77141]: 2026-02-02 11:31:21.850974545 +0000 UTC m=+0.026688636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:21 np0005604943 podman[77141]: 2026-02-02 11:31:21.947025335 +0000 UTC m=+0.122739406 container init bf8627d729c4b04065cba02b356eea84696632a0a554f0ab58700c6ffd3b35d7 (image=quay.io/ceph/ceph:v20, name=wizardly_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:21 np0005604943 podman[77141]: 2026-02-02 11:31:21.95499991 +0000 UTC m=+0.130713901 container start bf8627d729c4b04065cba02b356eea84696632a0a554f0ab58700c6ffd3b35d7 (image=quay.io/ceph/ceph:v20, name=wizardly_brahmagupta, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 06:31:21 np0005604943 podman[77141]: 2026-02-02 11:31:21.958712252 +0000 UTC m=+0.134426333 container attach bf8627d729c4b04065cba02b356eea84696632a0a554f0ab58700c6ffd3b35d7 (image=quay.io/ceph/ceph:v20, name=wizardly_brahmagupta, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 06:31:22 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:22 np0005604943 ceph-mon[75271]: Added host compute-0
Feb  2 06:31:22 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:22 np0005604943 podman[77102]: 2026-02-02 11:31:22.195634586 +0000 UTC m=+0.746459575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:22 np0005604943 podman[77195]: 2026-02-02 11:31:22.33307904 +0000 UTC m=+0.056574878 container create 0568b6eac5f568c8c70625784f1df052479c2c99038d31df5ee11cabaee647f9 (image=quay.io/ceph/ceph:v20, name=interesting_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:22 np0005604943 systemd[1]: Started libpod-conmon-0568b6eac5f568c8c70625784f1df052479c2c99038d31df5ee11cabaee647f9.scope.
Feb  2 06:31:22 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:31:22 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Saving service mgr spec with placement count:2
Feb  2 06:31:22 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Feb  2 06:31:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 06:31:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:22 np0005604943 wizardly_brahmagupta[77157]: Scheduled mgr update...
Feb  2 06:31:22 np0005604943 podman[77141]: 2026-02-02 11:31:22.399571176 +0000 UTC m=+0.575285187 container died bf8627d729c4b04065cba02b356eea84696632a0a554f0ab58700c6ffd3b35d7 (image=quay.io/ceph/ceph:v20, name=wizardly_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 06:31:22 np0005604943 podman[77195]: 2026-02-02 11:31:22.305603044 +0000 UTC m=+0.029098952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:22 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:22 np0005604943 systemd[1]: libpod-bf8627d729c4b04065cba02b356eea84696632a0a554f0ab58700c6ffd3b35d7.scope: Deactivated successfully.
Feb  2 06:31:22 np0005604943 podman[77195]: 2026-02-02 11:31:22.410149734 +0000 UTC m=+0.133645552 container init 0568b6eac5f568c8c70625784f1df052479c2c99038d31df5ee11cabaee647f9 (image=quay.io/ceph/ceph:v20, name=interesting_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 06:31:22 np0005604943 podman[77195]: 2026-02-02 11:31:22.414066109 +0000 UTC m=+0.137561907 container start 0568b6eac5f568c8c70625784f1df052479c2c99038d31df5ee11cabaee647f9 (image=quay.io/ceph/ceph:v20, name=interesting_poitras, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:22 np0005604943 podman[77195]: 2026-02-02 11:31:22.41740097 +0000 UTC m=+0.140896798 container attach 0568b6eac5f568c8c70625784f1df052479c2c99038d31df5ee11cabaee647f9 (image=quay.io/ceph/ceph:v20, name=interesting_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 06:31:22 np0005604943 systemd[1]: var-lib-containers-storage-overlay-7651e5286d407b117e9402d8936e776316fe7425f04a4b8d91e009902e4fcc2d-merged.mount: Deactivated successfully.
Feb  2 06:31:22 np0005604943 podman[77141]: 2026-02-02 11:31:22.442014388 +0000 UTC m=+0.617728379 container remove bf8627d729c4b04065cba02b356eea84696632a0a554f0ab58700c6ffd3b35d7 (image=quay.io/ceph/ceph:v20, name=wizardly_brahmagupta, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:22 np0005604943 systemd[1]: libpod-conmon-bf8627d729c4b04065cba02b356eea84696632a0a554f0ab58700c6ffd3b35d7.scope: Deactivated successfully.
Feb  2 06:31:22 np0005604943 interesting_poitras[77212]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb  2 06:31:22 np0005604943 systemd[1]: libpod-0568b6eac5f568c8c70625784f1df052479c2c99038d31df5ee11cabaee647f9.scope: Deactivated successfully.
Feb  2 06:31:22 np0005604943 podman[77195]: 2026-02-02 11:31:22.528059726 +0000 UTC m=+0.251555534 container died 0568b6eac5f568c8c70625784f1df052479c2c99038d31df5ee11cabaee647f9 (image=quay.io/ceph/ceph:v20, name=interesting_poitras, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:22 np0005604943 podman[77230]: 2026-02-02 11:31:22.546412704 +0000 UTC m=+0.080174688 container create 536979dc743fc58c335a41603f808f5ccc3d683ce8d754c06ec5add720f7c805 (image=quay.io/ceph/ceph:v20, name=condescending_haibt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 06:31:22 np0005604943 systemd[1]: var-lib-containers-storage-overlay-7346f2562a26f6f9d86e6738078d0ac2f7ed92ad45e938d379d02ff39e08089f-merged.mount: Deactivated successfully.
Feb  2 06:31:22 np0005604943 podman[77195]: 2026-02-02 11:31:22.5735025 +0000 UTC m=+0.296998298 container remove 0568b6eac5f568c8c70625784f1df052479c2c99038d31df5ee11cabaee647f9 (image=quay.io/ceph/ceph:v20, name=interesting_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 06:31:22 np0005604943 systemd[1]: Started libpod-conmon-536979dc743fc58c335a41603f808f5ccc3d683ce8d754c06ec5add720f7c805.scope.
Feb  2 06:31:22 np0005604943 systemd[1]: libpod-conmon-0568b6eac5f568c8c70625784f1df052479c2c99038d31df5ee11cabaee647f9.scope: Deactivated successfully.
Feb  2 06:31:22 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:22 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0968ff17ab17203491aa960ca0d39f4eb57ca26e9d1437e11266443676cd830/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:22 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0968ff17ab17203491aa960ca0d39f4eb57ca26e9d1437e11266443676cd830/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:22 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0968ff17ab17203491aa960ca0d39f4eb57ca26e9d1437e11266443676cd830/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:22 np0005604943 podman[77230]: 2026-02-02 11:31:22.510431647 +0000 UTC m=+0.044193671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:22 np0005604943 podman[77230]: 2026-02-02 11:31:22.615928983 +0000 UTC m=+0.149691057 container init 536979dc743fc58c335a41603f808f5ccc3d683ce8d754c06ec5add720f7c805 (image=quay.io/ceph/ceph:v20, name=condescending_haibt, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 06:31:22 np0005604943 podman[77230]: 2026-02-02 11:31:22.620826675 +0000 UTC m=+0.154588659 container start 536979dc743fc58c335a41603f808f5ccc3d683ce8d754c06ec5add720f7c805 (image=quay.io/ceph/ceph:v20, name=condescending_haibt, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:22 np0005604943 podman[77230]: 2026-02-02 11:31:22.625270756 +0000 UTC m=+0.159032790 container attach 536979dc743fc58c335a41603f808f5ccc3d683ce8d754c06ec5add720f7c805 (image=quay.io/ceph/ceph:v20, name=condescending_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 06:31:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Feb  2 06:31:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:22 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:23 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:31:23 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Saving service crash spec with placement *
Feb  2 06:31:23 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Feb  2 06:31:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  2 06:31:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:23 np0005604943 condescending_haibt[77257]: Scheduled crash update...
Feb  2 06:31:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:23 np0005604943 systemd[1]: libpod-536979dc743fc58c335a41603f808f5ccc3d683ce8d754c06ec5add720f7c805.scope: Deactivated successfully.
Feb  2 06:31:23 np0005604943 podman[77230]: 2026-02-02 11:31:23.178980326 +0000 UTC m=+0.712742300 container died 536979dc743fc58c335a41603f808f5ccc3d683ce8d754c06ec5add720f7c805 (image=quay.io/ceph/ceph:v20, name=condescending_haibt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 06:31:23 np0005604943 systemd[1]: var-lib-containers-storage-overlay-c0968ff17ab17203491aa960ca0d39f4eb57ca26e9d1437e11266443676cd830-merged.mount: Deactivated successfully.
Feb  2 06:31:23 np0005604943 podman[77230]: 2026-02-02 11:31:23.218252542 +0000 UTC m=+0.752014506 container remove 536979dc743fc58c335a41603f808f5ccc3d683ce8d754c06ec5add720f7c805 (image=quay.io/ceph/ceph:v20, name=condescending_haibt, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:23 np0005604943 systemd[1]: libpod-conmon-536979dc743fc58c335a41603f808f5ccc3d683ce8d754c06ec5add720f7c805.scope: Deactivated successfully.
Feb  2 06:31:23 np0005604943 podman[77391]: 2026-02-02 11:31:23.277770508 +0000 UTC m=+0.041469876 container create 68c2ddf6e0c0df4561d2b01ab59e4f3a5aa77d9e5f62e5c91b824f97f496eb9b (image=quay.io/ceph/ceph:v20, name=practical_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 06:31:23 np0005604943 systemd[1]: Started libpod-conmon-68c2ddf6e0c0df4561d2b01ab59e4f3a5aa77d9e5f62e5c91b824f97f496eb9b.scope.
Feb  2 06:31:23 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:23 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb68ad30dd7f2e9d0343dfa37ad92071dc2aa089da87d0df3930786d60e48129/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:23 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb68ad30dd7f2e9d0343dfa37ad92071dc2aa089da87d0df3930786d60e48129/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:23 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb68ad30dd7f2e9d0343dfa37ad92071dc2aa089da87d0df3930786d60e48129/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:23 np0005604943 podman[77391]: 2026-02-02 11:31:23.258702921 +0000 UTC m=+0.022402319 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:23 np0005604943 podman[77391]: 2026-02-02 11:31:23.356137247 +0000 UTC m=+0.119836635 container init 68c2ddf6e0c0df4561d2b01ab59e4f3a5aa77d9e5f62e5c91b824f97f496eb9b (image=quay.io/ceph/ceph:v20, name=practical_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Feb  2 06:31:23 np0005604943 podman[77391]: 2026-02-02 11:31:23.36178527 +0000 UTC m=+0.125484638 container start 68c2ddf6e0c0df4561d2b01ab59e4f3a5aa77d9e5f62e5c91b824f97f496eb9b (image=quay.io/ceph/ceph:v20, name=practical_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:23 np0005604943 podman[77391]: 2026-02-02 11:31:23.365035349 +0000 UTC m=+0.128734737 container attach 68c2ddf6e0c0df4561d2b01ab59e4f3a5aa77d9e5f62e5c91b824f97f496eb9b (image=quay.io/ceph/ceph:v20, name=practical_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:23 np0005604943 ceph-mon[75271]: Saving service mon spec with placement count:5
Feb  2 06:31:23 np0005604943 ceph-mon[75271]: Saving service mgr spec with placement count:2
Feb  2 06:31:23 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:23 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:23 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:23 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:23 np0005604943 ceph-mgr[75558]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 06:31:23 np0005604943 podman[77501]: 2026-02-02 11:31:23.710255645 +0000 UTC m=+0.079866100 container exec fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Feb  2 06:31:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3779412048' entity='client.admin' 
Feb  2 06:31:23 np0005604943 systemd[1]: libpod-68c2ddf6e0c0df4561d2b01ab59e4f3a5aa77d9e5f62e5c91b824f97f496eb9b.scope: Deactivated successfully.
Feb  2 06:31:23 np0005604943 conmon[77432]: conmon 68c2ddf6e0c0df4561d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68c2ddf6e0c0df4561d2b01ab59e4f3a5aa77d9e5f62e5c91b824f97f496eb9b.scope/container/memory.events
Feb  2 06:31:23 np0005604943 podman[77391]: 2026-02-02 11:31:23.766292157 +0000 UTC m=+0.529991515 container died 68c2ddf6e0c0df4561d2b01ab59e4f3a5aa77d9e5f62e5c91b824f97f496eb9b (image=quay.io/ceph/ceph:v20, name=practical_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 06:31:23 np0005604943 podman[77501]: 2026-02-02 11:31:23.783658549 +0000 UTC m=+0.153268974 container exec_died fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:23 np0005604943 systemd[1]: var-lib-containers-storage-overlay-fb68ad30dd7f2e9d0343dfa37ad92071dc2aa089da87d0df3930786d60e48129-merged.mount: Deactivated successfully.
Feb  2 06:31:23 np0005604943 podman[77391]: 2026-02-02 11:31:23.802675085 +0000 UTC m=+0.566374443 container remove 68c2ddf6e0c0df4561d2b01ab59e4f3a5aa77d9e5f62e5c91b824f97f496eb9b (image=quay.io/ceph/ceph:v20, name=practical_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:23 np0005604943 systemd[1]: libpod-conmon-68c2ddf6e0c0df4561d2b01ab59e4f3a5aa77d9e5f62e5c91b824f97f496eb9b.scope: Deactivated successfully.
Feb  2 06:31:23 np0005604943 podman[77548]: 2026-02-02 11:31:23.858721088 +0000 UTC m=+0.037995993 container create 91832e4345b7a5fe67bbcf9808e2ae8a519bdee67849ebf1c8648e09e1b06ae9 (image=quay.io/ceph/ceph:v20, name=cool_kilby, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 06:31:23 np0005604943 systemd[1]: Started libpod-conmon-91832e4345b7a5fe67bbcf9808e2ae8a519bdee67849ebf1c8648e09e1b06ae9.scope.
Feb  2 06:31:23 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:23 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfffd156fa8f508cc4f764c5791b033f353077ed5cf88b857c167f9fa41c2e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:23 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfffd156fa8f508cc4f764c5791b033f353077ed5cf88b857c167f9fa41c2e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:23 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfffd156fa8f508cc4f764c5791b033f353077ed5cf88b857c167f9fa41c2e0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:23 np0005604943 podman[77548]: 2026-02-02 11:31:23.844265685 +0000 UTC m=+0.023540610 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:23 np0005604943 podman[77548]: 2026-02-02 11:31:23.960760629 +0000 UTC m=+0.140035584 container init 91832e4345b7a5fe67bbcf9808e2ae8a519bdee67849ebf1c8648e09e1b06ae9 (image=quay.io/ceph/ceph:v20, name=cool_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 06:31:23 np0005604943 podman[77548]: 2026-02-02 11:31:23.968605233 +0000 UTC m=+0.147880178 container start 91832e4345b7a5fe67bbcf9808e2ae8a519bdee67849ebf1c8648e09e1b06ae9 (image=quay.io/ceph/ceph:v20, name=cool_kilby, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:23 np0005604943 podman[77548]: 2026-02-02 11:31:23.972522938 +0000 UTC m=+0.151797923 container attach 91832e4345b7a5fe67bbcf9808e2ae8a519bdee67849ebf1c8648e09e1b06ae9 (image=quay.io/ceph/ceph:v20, name=cool_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:24 np0005604943 ceph-mon[75271]: Saving service crash spec with placement *
Feb  2 06:31:24 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3779412048' entity='client.admin' 
Feb  2 06:31:24 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:24 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:31:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Feb  2 06:31:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:24 np0005604943 podman[77548]: 2026-02-02 11:31:24.428886843 +0000 UTC m=+0.608161748 container died 91832e4345b7a5fe67bbcf9808e2ae8a519bdee67849ebf1c8648e09e1b06ae9 (image=quay.io/ceph/ceph:v20, name=cool_kilby, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb  2 06:31:24 np0005604943 systemd[1]: libpod-91832e4345b7a5fe67bbcf9808e2ae8a519bdee67849ebf1c8648e09e1b06ae9.scope: Deactivated successfully.
Feb  2 06:31:24 np0005604943 systemd[1]: var-lib-containers-storage-overlay-9bfffd156fa8f508cc4f764c5791b033f353077ed5cf88b857c167f9fa41c2e0-merged.mount: Deactivated successfully.
Feb  2 06:31:24 np0005604943 podman[77548]: 2026-02-02 11:31:24.483672251 +0000 UTC m=+0.662947186 container remove 91832e4345b7a5fe67bbcf9808e2ae8a519bdee67849ebf1c8648e09e1b06ae9 (image=quay.io/ceph/ceph:v20, name=cool_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:24 np0005604943 systemd[1]: libpod-conmon-91832e4345b7a5fe67bbcf9808e2ae8a519bdee67849ebf1c8648e09e1b06ae9.scope: Deactivated successfully.
Feb  2 06:31:24 np0005604943 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77716 (sysctl)
Feb  2 06:31:24 np0005604943 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb  2 06:31:24 np0005604943 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb  2 06:31:24 np0005604943 podman[77699]: 2026-02-02 11:31:24.571802135 +0000 UTC m=+0.065380566 container create 3d4316f266e754e9879cf820d9a5ea1dcbb109444640336bf9aac187b2469224 (image=quay.io/ceph/ceph:v20, name=elastic_pare, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 06:31:24 np0005604943 systemd[1]: Started libpod-conmon-3d4316f266e754e9879cf820d9a5ea1dcbb109444640336bf9aac187b2469224.scope.
Feb  2 06:31:24 np0005604943 podman[77699]: 2026-02-02 11:31:24.540419403 +0000 UTC m=+0.033997834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:24 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc6f558a283d7f246f7ce0a927eb5d7342bb52fa0bd66bccdb6cc67238e187d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc6f558a283d7f246f7ce0a927eb5d7342bb52fa0bd66bccdb6cc67238e187d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc6f558a283d7f246f7ce0a927eb5d7342bb52fa0bd66bccdb6cc67238e187d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:24 np0005604943 podman[77699]: 2026-02-02 11:31:24.660625898 +0000 UTC m=+0.154204389 container init 3d4316f266e754e9879cf820d9a5ea1dcbb109444640336bf9aac187b2469224 (image=quay.io/ceph/ceph:v20, name=elastic_pare, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:24 np0005604943 podman[77699]: 2026-02-02 11:31:24.667637039 +0000 UTC m=+0.161215470 container start 3d4316f266e754e9879cf820d9a5ea1dcbb109444640336bf9aac187b2469224 (image=quay.io/ceph/ceph:v20, name=elastic_pare, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 06:31:24 np0005604943 podman[77699]: 2026-02-02 11:31:24.67137568 +0000 UTC m=+0.164954121 container attach 3d4316f266e754e9879cf820d9a5ea1dcbb109444640336bf9aac187b2469224 (image=quay.io/ceph/ceph:v20, name=elastic_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 06:31:24 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:25 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:31:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 06:31:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:25 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Added label _admin to host compute-0
Feb  2 06:31:25 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Feb  2 06:31:25 np0005604943 elastic_pare[77730]: Added label _admin to host compute-0
Feb  2 06:31:25 np0005604943 systemd[1]: libpod-3d4316f266e754e9879cf820d9a5ea1dcbb109444640336bf9aac187b2469224.scope: Deactivated successfully.
Feb  2 06:31:25 np0005604943 podman[77699]: 2026-02-02 11:31:25.097563736 +0000 UTC m=+0.591142127 container died 3d4316f266e754e9879cf820d9a5ea1dcbb109444640336bf9aac187b2469224 (image=quay.io/ceph/ceph:v20, name=elastic_pare, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:25 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1bc6f558a283d7f246f7ce0a927eb5d7342bb52fa0bd66bccdb6cc67238e187d-merged.mount: Deactivated successfully.
Feb  2 06:31:25 np0005604943 podman[77699]: 2026-02-02 11:31:25.150188585 +0000 UTC m=+0.643767026 container remove 3d4316f266e754e9879cf820d9a5ea1dcbb109444640336bf9aac187b2469224 (image=quay.io/ceph/ceph:v20, name=elastic_pare, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:25 np0005604943 systemd[1]: libpod-conmon-3d4316f266e754e9879cf820d9a5ea1dcbb109444640336bf9aac187b2469224.scope: Deactivated successfully.
Feb  2 06:31:25 np0005604943 podman[77837]: 2026-02-02 11:31:25.222023696 +0000 UTC m=+0.048243482 container create 7ca8d53968a5a7d98f730f8813a7efc2a062156c66034b5de08226f421eec4bc (image=quay.io/ceph/ceph:v20, name=admiring_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:25 np0005604943 systemd[1]: Started libpod-conmon-7ca8d53968a5a7d98f730f8813a7efc2a062156c66034b5de08226f421eec4bc.scope.
Feb  2 06:31:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:25 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:25 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0a6217be0c9cf1190775c780730e590b51f42b2f04ac5c06a0095c587137c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:25 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0a6217be0c9cf1190775c780730e590b51f42b2f04ac5c06a0095c587137c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:25 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0a6217be0c9cf1190775c780730e590b51f42b2f04ac5c06a0095c587137c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:25 np0005604943 podman[77837]: 2026-02-02 11:31:25.204431977 +0000 UTC m=+0.030651813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:25 np0005604943 podman[77837]: 2026-02-02 11:31:25.319710479 +0000 UTC m=+0.145930355 container init 7ca8d53968a5a7d98f730f8813a7efc2a062156c66034b5de08226f421eec4bc (image=quay.io/ceph/ceph:v20, name=admiring_gates, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 06:31:25 np0005604943 podman[77837]: 2026-02-02 11:31:25.329024981 +0000 UTC m=+0.155244777 container start 7ca8d53968a5a7d98f730f8813a7efc2a062156c66034b5de08226f421eec4bc (image=quay.io/ceph/ceph:v20, name=admiring_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:25 np0005604943 podman[77837]: 2026-02-02 11:31:25.33301579 +0000 UTC m=+0.159235616 container attach 7ca8d53968a5a7d98f730f8813a7efc2a062156c66034b5de08226f421eec4bc (image=quay.io/ceph/ceph:v20, name=admiring_gates, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:25 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:25 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:25 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:25 np0005604943 ceph-mgr[75558]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 06:31:25 np0005604943 podman[77956]: 2026-02-02 11:31:25.689963775 +0000 UTC m=+0.048719854 container create 96034a564d4162585669d1409f2b47fedcac77467acf71ad83272df71ffca85b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chatelet, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:25 np0005604943 systemd[1]: Started libpod-conmon-96034a564d4162585669d1409f2b47fedcac77467acf71ad83272df71ffca85b.scope.
Feb  2 06:31:25 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:25 np0005604943 podman[77956]: 2026-02-02 11:31:25.758482676 +0000 UTC m=+0.117238805 container init 96034a564d4162585669d1409f2b47fedcac77467acf71ad83272df71ffca85b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chatelet, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:31:25 np0005604943 podman[77956]: 2026-02-02 11:31:25.672171501 +0000 UTC m=+0.030927610 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:25 np0005604943 podman[77956]: 2026-02-02 11:31:25.7667302 +0000 UTC m=+0.125486319 container start 96034a564d4162585669d1409f2b47fedcac77467acf71ad83272df71ffca85b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:25 np0005604943 priceless_chatelet[77972]: 167 167
Feb  2 06:31:25 np0005604943 systemd[1]: libpod-96034a564d4162585669d1409f2b47fedcac77467acf71ad83272df71ffca85b.scope: Deactivated successfully.
Feb  2 06:31:25 np0005604943 conmon[77972]: conmon 96034a564d4162585669 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96034a564d4162585669d1409f2b47fedcac77467acf71ad83272df71ffca85b.scope/container/memory.events
Feb  2 06:31:25 np0005604943 podman[77956]: 2026-02-02 11:31:25.771260323 +0000 UTC m=+0.130016462 container attach 96034a564d4162585669d1409f2b47fedcac77467acf71ad83272df71ffca85b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 06:31:25 np0005604943 podman[77956]: 2026-02-02 11:31:25.771588852 +0000 UTC m=+0.130344971 container died 96034a564d4162585669d1409f2b47fedcac77467acf71ad83272df71ffca85b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:25 np0005604943 systemd[1]: var-lib-containers-storage-overlay-0f6d300c3bd4ddf9b4f120a162238a7083e5345ffd5f2619c3ee53533f058ba9-merged.mount: Deactivated successfully.
Feb  2 06:31:25 np0005604943 podman[77956]: 2026-02-02 11:31:25.819569375 +0000 UTC m=+0.178325454 container remove 96034a564d4162585669d1409f2b47fedcac77467acf71ad83272df71ffca85b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 06:31:25 np0005604943 systemd[1]: libpod-conmon-96034a564d4162585669d1409f2b47fedcac77467acf71ad83272df71ffca85b.scope: Deactivated successfully.
Feb  2 06:31:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Feb  2 06:31:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4195599501' entity='client.admin' 
Feb  2 06:31:25 np0005604943 admiring_gates[77869]: set mgr/dashboard/cluster/status
Feb  2 06:31:25 np0005604943 systemd[1]: libpod-7ca8d53968a5a7d98f730f8813a7efc2a062156c66034b5de08226f421eec4bc.scope: Deactivated successfully.
Feb  2 06:31:25 np0005604943 podman[77837]: 2026-02-02 11:31:25.951434907 +0000 UTC m=+0.777654763 container died 7ca8d53968a5a7d98f730f8813a7efc2a062156c66034b5de08226f421eec4bc (image=quay.io/ceph/ceph:v20, name=admiring_gates, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 06:31:25 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ba0a6217be0c9cf1190775c780730e590b51f42b2f04ac5c06a0095c587137c2-merged.mount: Deactivated successfully.
Feb  2 06:31:25 np0005604943 podman[77837]: 2026-02-02 11:31:25.986283713 +0000 UTC m=+0.812503529 container remove 7ca8d53968a5a7d98f730f8813a7efc2a062156c66034b5de08226f421eec4bc (image=quay.io/ceph/ceph:v20, name=admiring_gates, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Feb  2 06:31:25 np0005604943 systemd[1]: libpod-conmon-7ca8d53968a5a7d98f730f8813a7efc2a062156c66034b5de08226f421eec4bc.scope: Deactivated successfully.
Feb  2 06:31:26 np0005604943 systemd[1]: Reloading.
Feb  2 06:31:26 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:31:26 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:31:26 np0005604943 ceph-mon[75271]: Added label _admin to host compute-0
Feb  2 06:31:26 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/4195599501' entity='client.admin' 
Feb  2 06:31:26 np0005604943 podman[78049]: 2026-02-02 11:31:26.435016891 +0000 UTC m=+0.036762279 container create 649209c2968a79ae7b0f783efdfa2c003c008f090c58ac19ad5f6c0a09823ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_moser, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 06:31:26 np0005604943 systemd[1]: Started libpod-conmon-649209c2968a79ae7b0f783efdfa2c003c008f090c58ac19ad5f6c0a09823ce8.scope.
Feb  2 06:31:26 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:26 np0005604943 podman[78049]: 2026-02-02 11:31:26.417305861 +0000 UTC m=+0.019051239 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8a7539977cb47b4303783cf02b17c167c7866ff19869015e4fe482cafe57c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8a7539977cb47b4303783cf02b17c167c7866ff19869015e4fe482cafe57c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8a7539977cb47b4303783cf02b17c167c7866ff19869015e4fe482cafe57c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8a7539977cb47b4303783cf02b17c167c7866ff19869015e4fe482cafe57c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:26 np0005604943 podman[78049]: 2026-02-02 11:31:26.538477021 +0000 UTC m=+0.140222439 container init 649209c2968a79ae7b0f783efdfa2c003c008f090c58ac19ad5f6c0a09823ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 06:31:26 np0005604943 podman[78049]: 2026-02-02 11:31:26.553997583 +0000 UTC m=+0.155742971 container start 649209c2968a79ae7b0f783efdfa2c003c008f090c58ac19ad5f6c0a09823ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_moser, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:26 np0005604943 podman[78049]: 2026-02-02 11:31:26.558504695 +0000 UTC m=+0.160250173 container attach 649209c2968a79ae7b0f783efdfa2c003c008f090c58ac19ad5f6c0a09823ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 06:31:26 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:31:26 np0005604943 python3[78095]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:31:26 np0005604943 podman[78101]: 2026-02-02 11:31:26.913971572 +0000 UTC m=+0.061006878 container create 4dd05e76ee4b9a7ab075334ae46bfba84e9be6d94eed2ee8e4c228dd9e3e431e (image=quay.io/ceph/ceph:v20, name=friendly_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:26 np0005604943 systemd[1]: Started libpod-conmon-4dd05e76ee4b9a7ab075334ae46bfba84e9be6d94eed2ee8e4c228dd9e3e431e.scope.
Feb  2 06:31:26 np0005604943 podman[78101]: 2026-02-02 11:31:26.889867844 +0000 UTC m=+0.036903200 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:26 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd174a925374d63baa2ac146e13389a783a8ff6789f08fd89089a875573ab550/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd174a925374d63baa2ac146e13389a783a8ff6789f08fd89089a875573ab550/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:27 np0005604943 podman[78101]: 2026-02-02 11:31:27.015200626 +0000 UTC m=+0.162235992 container init 4dd05e76ee4b9a7ab075334ae46bfba84e9be6d94eed2ee8e4c228dd9e3e431e (image=quay.io/ceph/ceph:v20, name=friendly_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 06:31:27 np0005604943 podman[78101]: 2026-02-02 11:31:27.021296142 +0000 UTC m=+0.168331478 container start 4dd05e76ee4b9a7ab075334ae46bfba84e9be6d94eed2ee8e4c228dd9e3e431e (image=quay.io/ceph/ceph:v20, name=friendly_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:27 np0005604943 podman[78101]: 2026-02-02 11:31:27.025317369 +0000 UTC m=+0.172352715 container attach 4dd05e76ee4b9a7ab075334ae46bfba84e9be6d94eed2ee8e4c228dd9e3e431e (image=quay.io/ceph/ceph:v20, name=friendly_sutherland, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:27 np0005604943 laughing_moser[78065]: [
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:    {
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:        "available": false,
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:        "being_replaced": false,
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:        "ceph_device_lvm": false,
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:        "lsm_data": {},
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:        "lvs": [],
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:        "path": "/dev/sr0",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:        "rejected_reasons": [
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "Has a FileSystem",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "Insufficient space (<5GB)"
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:        ],
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:        "sys_api": {
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "actuators": null,
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "device_nodes": [
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:                "sr0"
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            ],
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "devname": "sr0",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "human_readable_size": "482.00 KB",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "id_bus": "ata",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "model": "QEMU DVD-ROM",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "nr_requests": "2",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "parent": "/dev/sr0",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "partitions": {},
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "path": "/dev/sr0",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "removable": "1",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "rev": "2.5+",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "ro": "0",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "rotational": "1",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "sas_address": "",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "sas_device_handle": "",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "scheduler_mode": "mq-deadline",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "sectors": 0,
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "sectorsize": "2048",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "size": 493568.0,
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "support_discard": "2048",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "type": "disk",
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:            "vendor": "QEMU"
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:        }
Feb  2 06:31:27 np0005604943 laughing_moser[78065]:    }
Feb  2 06:31:27 np0005604943 laughing_moser[78065]: ]
Feb  2 06:31:27 np0005604943 systemd[1]: libpod-649209c2968a79ae7b0f783efdfa2c003c008f090c58ac19ad5f6c0a09823ce8.scope: Deactivated successfully.
Feb  2 06:31:27 np0005604943 podman[78868]: 2026-02-02 11:31:27.131830525 +0000 UTC m=+0.030599508 container died 649209c2968a79ae7b0f783efdfa2c003c008f090c58ac19ad5f6c0a09823ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_moser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 06:31:27 np0005604943 systemd[1]: var-lib-containers-storage-overlay-df8a7539977cb47b4303783cf02b17c167c7866ff19869015e4fe482cafe57c1-merged.mount: Deactivated successfully.
Feb  2 06:31:27 np0005604943 podman[78868]: 2026-02-02 11:31:27.17203673 +0000 UTC m=+0.070805653 container remove 649209c2968a79ae7b0f783efdfa2c003c008f090c58ac19ad5f6c0a09823ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_moser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 06:31:27 np0005604943 systemd[1]: libpod-conmon-649209c2968a79ae7b0f783efdfa2c003c008f090c58ac19ad5f6c0a09823ce8.scope: Deactivated successfully.
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3601160295' entity='client.admin' 
Feb  2 06:31:27 np0005604943 ceph-mgr[75558]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 06:31:27 np0005604943 systemd[1]: libpod-4dd05e76ee4b9a7ab075334ae46bfba84e9be6d94eed2ee8e4c228dd9e3e431e.scope: Deactivated successfully.
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 06:31:27 np0005604943 podman[78902]: 2026-02-02 11:31:27.619887816 +0000 UTC m=+0.041333448 container died 4dd05e76ee4b9a7ab075334ae46bfba84e9be6d94eed2ee8e4c228dd9e3e431e (image=quay.io/ceph/ceph:v20, name=friendly_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:31:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:31:27 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb  2 06:31:27 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb  2 06:31:27 np0005604943 systemd[1]: var-lib-containers-storage-overlay-dd174a925374d63baa2ac146e13389a783a8ff6789f08fd89089a875573ab550-merged.mount: Deactivated successfully.
Feb  2 06:31:27 np0005604943 podman[78902]: 2026-02-02 11:31:27.676546538 +0000 UTC m=+0.097992130 container remove 4dd05e76ee4b9a7ab075334ae46bfba84e9be6d94eed2ee8e4c228dd9e3e431e (image=quay.io/ceph/ceph:v20, name=friendly_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 06:31:27 np0005604943 systemd[1]: libpod-conmon-4dd05e76ee4b9a7ab075334ae46bfba84e9be6d94eed2ee8e4c228dd9e3e431e.scope: Deactivated successfully.
Feb  2 06:31:28 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4548a36b-7cdc-5e3e-a814-4e1571be1fae/config/ceph.conf
Feb  2 06:31:28 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4548a36b-7cdc-5e3e-a814-4e1571be1fae/config/ceph.conf
Feb  2 06:31:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:28 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3601160295' entity='client.admin' 
Feb  2 06:31:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 06:31:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:31:28 np0005604943 ceph-mon[75271]: Updating compute-0:/etc/ceph/ceph.conf
Feb  2 06:31:28 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:28 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 06:31:28 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 06:31:28 np0005604943 ansible-async_wrapper.py[79520]: Invoked with j366815747818 30 /home/zuul/.ansible/tmp/ansible-tmp-1770031888.324933-36555-272878428057528/AnsiballZ_command.py _
Feb  2 06:31:28 np0005604943 ansible-async_wrapper.py[79613]: Starting module and watcher
Feb  2 06:31:28 np0005604943 ansible-async_wrapper.py[79613]: Start watching 79614 (30)
Feb  2 06:31:28 np0005604943 ansible-async_wrapper.py[79614]: Start module (79614)
Feb  2 06:31:28 np0005604943 ansible-async_wrapper.py[79520]: Return async_wrapper task started.
Feb  2 06:31:29 np0005604943 python3[79615]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:31:29 np0005604943 podman[79689]: 2026-02-02 11:31:29.074283358 +0000 UTC m=+0.049155045 container create c359d315f2fcd44b96bca83db1743025327345c50ca920f20d913963ec4b9e0d (image=quay.io/ceph/ceph:v20, name=wonderful_franklin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:29 np0005604943 systemd[1]: Started libpod-conmon-c359d315f2fcd44b96bca83db1743025327345c50ca920f20d913963ec4b9e0d.scope.
Feb  2 06:31:29 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4548a36b-7cdc-5e3e-a814-4e1571be1fae/config/ceph.client.admin.keyring
Feb  2 06:31:29 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4548a36b-7cdc-5e3e-a814-4e1571be1fae/config/ceph.client.admin.keyring
Feb  2 06:31:29 np0005604943 podman[79689]: 2026-02-02 11:31:29.051547899 +0000 UTC m=+0.026419566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:29 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc7a3afe4df07cfe7c29b9ae85464c03b0b1bc563e9cf1a04383fd914b2dd66/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc7a3afe4df07cfe7c29b9ae85464c03b0b1bc563e9cf1a04383fd914b2dd66/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:29 np0005604943 podman[79689]: 2026-02-02 11:31:29.178886159 +0000 UTC m=+0.153757896 container init c359d315f2fcd44b96bca83db1743025327345c50ca920f20d913963ec4b9e0d (image=quay.io/ceph/ceph:v20, name=wonderful_franklin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:29 np0005604943 podman[79689]: 2026-02-02 11:31:29.187112888 +0000 UTC m=+0.161984525 container start c359d315f2fcd44b96bca83db1743025327345c50ca920f20d913963ec4b9e0d (image=quay.io/ceph/ceph:v20, name=wonderful_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 06:31:29 np0005604943 podman[79689]: 2026-02-02 11:31:29.191092743 +0000 UTC m=+0.165964410 container attach c359d315f2fcd44b96bca83db1743025327345c50ca920f20d913963ec4b9e0d (image=quay.io/ceph/ceph:v20, name=wonderful_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:29 np0005604943 ceph-mon[75271]: Updating compute-0:/var/lib/ceph/4548a36b-7cdc-5e3e-a814-4e1571be1fae/config/ceph.conf
Feb  2 06:31:29 np0005604943 ceph-mgr[75558]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Feb  2 06:31:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:29 np0005604943 ceph-mon[75271]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb  2 06:31:29 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 06:31:29 np0005604943 wonderful_franklin[79755]: 
Feb  2 06:31:29 np0005604943 wonderful_franklin[79755]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  2 06:31:29 np0005604943 systemd[1]: libpod-c359d315f2fcd44b96bca83db1743025327345c50ca920f20d913963ec4b9e0d.scope: Deactivated successfully.
Feb  2 06:31:29 np0005604943 podman[79689]: 2026-02-02 11:31:29.613026658 +0000 UTC m=+0.587898325 container died c359d315f2fcd44b96bca83db1743025327345c50ca920f20d913963ec4b9e0d (image=quay.io/ceph/ceph:v20, name=wonderful_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Feb  2 06:31:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:31:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:31:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:29 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev 699f12e1-c262-49ca-bda7-69a45a41b247 (Updating crash deployment (+1 -> 1))
Feb  2 06:31:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb  2 06:31:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb  2 06:31:29 np0005604943 systemd[1]: var-lib-containers-storage-overlay-fbc7a3afe4df07cfe7c29b9ae85464c03b0b1bc563e9cf1a04383fd914b2dd66-merged.mount: Deactivated successfully.
Feb  2 06:31:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb  2 06:31:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:31:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:31:29 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Feb  2 06:31:29 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Feb  2 06:31:29 np0005604943 podman[79689]: 2026-02-02 11:31:29.656851478 +0000 UTC m=+0.631723115 container remove c359d315f2fcd44b96bca83db1743025327345c50ca920f20d913963ec4b9e0d (image=quay.io/ceph/ceph:v20, name=wonderful_franklin, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:29 np0005604943 systemd[1]: libpod-conmon-c359d315f2fcd44b96bca83db1743025327345c50ca920f20d913963ec4b9e0d.scope: Deactivated successfully.
Feb  2 06:31:29 np0005604943 ansible-async_wrapper.py[79614]: Module complete (79614)
Feb  2 06:31:30 np0005604943 podman[80130]: 2026-02-02 11:31:30.154401725 +0000 UTC m=+0.060799663 container create cdd5840d6ec8f25d1f42f84d1b5e650c3b16d8bab7805265c6e1ec3502a0086a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_taussig, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 06:31:30 np0005604943 systemd[1]: Started libpod-conmon-cdd5840d6ec8f25d1f42f84d1b5e650c3b16d8bab7805265c6e1ec3502a0086a.scope.
Feb  2 06:31:30 np0005604943 podman[80130]: 2026-02-02 11:31:30.125723654 +0000 UTC m=+0.032121622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:30 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:30 np0005604943 podman[80130]: 2026-02-02 11:31:30.249518081 +0000 UTC m=+0.155916069 container init cdd5840d6ec8f25d1f42f84d1b5e650c3b16d8bab7805265c6e1ec3502a0086a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_taussig, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 06:31:30 np0005604943 podman[80130]: 2026-02-02 11:31:30.259228332 +0000 UTC m=+0.165626220 container start cdd5840d6ec8f25d1f42f84d1b5e650c3b16d8bab7805265c6e1ec3502a0086a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_taussig, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Feb  2 06:31:30 np0005604943 podman[80130]: 2026-02-02 11:31:30.263398543 +0000 UTC m=+0.169796521 container attach cdd5840d6ec8f25d1f42f84d1b5e650c3b16d8bab7805265c6e1ec3502a0086a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_taussig, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 06:31:30 np0005604943 happy_taussig[80172]: 167 167
Feb  2 06:31:30 np0005604943 systemd[1]: libpod-cdd5840d6ec8f25d1f42f84d1b5e650c3b16d8bab7805265c6e1ec3502a0086a.scope: Deactivated successfully.
Feb  2 06:31:30 np0005604943 podman[80130]: 2026-02-02 11:31:30.265946467 +0000 UTC m=+0.172344395 container died cdd5840d6ec8f25d1f42f84d1b5e650c3b16d8bab7805265c6e1ec3502a0086a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 06:31:30 np0005604943 systemd[1]: var-lib-containers-storage-overlay-96d3cc78131cac3291eb46c61c1a51c4706c401bb28d2f9d10799c3bb207bfb0-merged.mount: Deactivated successfully.
Feb  2 06:31:30 np0005604943 podman[80130]: 2026-02-02 11:31:30.306648736 +0000 UTC m=+0.213046634 container remove cdd5840d6ec8f25d1f42f84d1b5e650c3b16d8bab7805265c6e1ec3502a0086a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 06:31:30 np0005604943 python3[80169]: ansible-ansible.legacy.async_status Invoked with jid=j366815747818.79520 mode=status _async_dir=/root/.ansible_async
Feb  2 06:31:30 np0005604943 systemd[1]: libpod-conmon-cdd5840d6ec8f25d1f42f84d1b5e650c3b16d8bab7805265c6e1ec3502a0086a.scope: Deactivated successfully.
Feb  2 06:31:30 np0005604943 systemd[1]: Reloading.
Feb  2 06:31:30 np0005604943 ceph-mon[75271]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 06:31:30 np0005604943 ceph-mon[75271]: Updating compute-0:/var/lib/ceph/4548a36b-7cdc-5e3e-a814-4e1571be1fae/config/ceph.client.admin.keyring
Feb  2 06:31:30 np0005604943 ceph-mon[75271]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb  2 06:31:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb  2 06:31:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb  2 06:31:30 np0005604943 ceph-mon[75271]: Deploying daemon crash.compute-0 on compute-0
Feb  2 06:31:30 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:31:30 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:31:30 np0005604943 systemd[1]: Reloading.
Feb  2 06:31:30 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:30 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:31:30 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:31:30 np0005604943 python3[80277]: ansible-ansible.legacy.async_status Invoked with jid=j366815747818.79520 mode=cleanup _async_dir=/root/.ansible_async
Feb  2 06:31:30 np0005604943 systemd[1]: Starting Ceph crash.compute-0 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae...
Feb  2 06:31:31 np0005604943 podman[80368]: 2026-02-02 11:31:31.157052967 +0000 UTC m=+0.045978803 container create 61b0483497dcb2f7f58b3253d407131b9b241fdd95d0c46b378db53811c0e7e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 06:31:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0c70f67ce5d3d1e60f45fe4f202579317f457f6bff89c7a5fc4897816948b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0c70f67ce5d3d1e60f45fe4f202579317f457f6bff89c7a5fc4897816948b9/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0c70f67ce5d3d1e60f45fe4f202579317f457f6bff89c7a5fc4897816948b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0c70f67ce5d3d1e60f45fe4f202579317f457f6bff89c7a5fc4897816948b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:31 np0005604943 podman[80368]: 2026-02-02 11:31:31.139970191 +0000 UTC m=+0.028896077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:31 np0005604943 podman[80368]: 2026-02-02 11:31:31.243982546 +0000 UTC m=+0.132908412 container init 61b0483497dcb2f7f58b3253d407131b9b241fdd95d0c46b378db53811c0e7e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-crash-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:31 np0005604943 podman[80368]: 2026-02-02 11:31:31.248775164 +0000 UTC m=+0.137701000 container start 61b0483497dcb2f7f58b3253d407131b9b241fdd95d0c46b378db53811c0e7e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-crash-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 06:31:31 np0005604943 bash[80368]: 61b0483497dcb2f7f58b3253d407131b9b241fdd95d0c46b378db53811c0e7e0
Feb  2 06:31:31 np0005604943 systemd[1]: Started Ceph crash.compute-0 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:31:31 np0005604943 python3[80405]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 06:31:31 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-crash-compute-0[80408]: INFO:ceph-crash:pinging cluster to exercise our key
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:31 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev 699f12e1-c262-49ca-bda7-69a45a41b247 (Updating crash deployment (+1 -> 1))
Feb  2 06:31:31 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event 699f12e1-c262-49ca-bda7-69a45a41b247 (Updating crash deployment (+1 -> 1)) in 2 seconds
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:31 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev d7408636-044d-49ec-bb39-515e2ac83158 (Updating mgr deployment (+1 -> 2))
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.nvrhyg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.nvrhyg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nvrhyg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "mgr services"} : dispatch
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:31:31 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.nvrhyg on compute-0
Feb  2 06:31:31 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.nvrhyg on compute-0
Feb  2 06:31:31 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-crash-compute-0[80408]: 2026-02-02T11:31:31.404+0000 7fed15abd640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb  2 06:31:31 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-crash-compute-0[80408]: 2026-02-02T11:31:31.404+0000 7fed15abd640 -1 AuthRegistry(0x7fed10052930) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb  2 06:31:31 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-crash-compute-0[80408]: 2026-02-02T11:31:31.408+0000 7fed15abd640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb  2 06:31:31 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-crash-compute-0[80408]: 2026-02-02T11:31:31.408+0000 7fed15abd640 -1 AuthRegistry(0x7fed15abbfe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb  2 06:31:31 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-crash-compute-0[80408]: 2026-02-02T11:31:31.409+0000 7fed0f7fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Feb  2 06:31:31 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-crash-compute-0[80408]: 2026-02-02T11:31:31.410+0000 7fed15abd640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Feb  2 06:31:31 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-crash-compute-0[80408]: [errno 13] RADOS permission denied (error connecting to the cluster)
Feb  2 06:31:31 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-crash-compute-0[80408]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Feb  2 06:31:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:31 np0005604943 python3[80502]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:31:31 np0005604943 podman[80517]: 2026-02-02 11:31:31.765650151 +0000 UTC m=+0.049880396 container create 8be1d3b1240938419ae7b59a61af5565dede55916283b2cac457cf60c90a6747 (image=quay.io/ceph/ceph:v20, name=xenodochial_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:31 np0005604943 systemd[1]: Started libpod-conmon-8be1d3b1240938419ae7b59a61af5565dede55916283b2cac457cf60c90a6747.scope.
Feb  2 06:31:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:31:31 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e62a882ca151cb9541a6047d646a3f58fdeb076bc571294c7934d8ffaedb66/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e62a882ca151cb9541a6047d646a3f58fdeb076bc571294c7934d8ffaedb66/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e62a882ca151cb9541a6047d646a3f58fdeb076bc571294c7934d8ffaedb66/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:31 np0005604943 podman[80558]: 2026-02-02 11:31:31.823370654 +0000 UTC m=+0.030745722 container create 8f7414e7ca0b4f677ae83875d81bfcf79c9b5fc177895256ce2b797eb401160b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb  2 06:31:31 np0005604943 systemd[1]: Started libpod-conmon-8f7414e7ca0b4f677ae83875d81bfcf79c9b5fc177895256ce2b797eb401160b.scope.
Feb  2 06:31:31 np0005604943 podman[80517]: 2026-02-02 11:31:31.839270145 +0000 UTC m=+0.123500500 container init 8be1d3b1240938419ae7b59a61af5565dede55916283b2cac457cf60c90a6747 (image=quay.io/ceph/ceph:v20, name=xenodochial_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb  2 06:31:31 np0005604943 podman[80517]: 2026-02-02 11:31:31.742612004 +0000 UTC m=+0.026842299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:31 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:31 np0005604943 podman[80517]: 2026-02-02 11:31:31.847699079 +0000 UTC m=+0.131929324 container start 8be1d3b1240938419ae7b59a61af5565dede55916283b2cac457cf60c90a6747 (image=quay.io/ceph/ceph:v20, name=xenodochial_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:31:31 np0005604943 podman[80517]: 2026-02-02 11:31:31.853520697 +0000 UTC m=+0.137750942 container attach 8be1d3b1240938419ae7b59a61af5565dede55916283b2cac457cf60c90a6747 (image=quay.io/ceph/ceph:v20, name=xenodochial_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  2 06:31:31 np0005604943 podman[80558]: 2026-02-02 11:31:31.859332025 +0000 UTC m=+0.066707123 container init 8f7414e7ca0b4f677ae83875d81bfcf79c9b5fc177895256ce2b797eb401160b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_swirles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 06:31:31 np0005604943 podman[80558]: 2026-02-02 11:31:31.867545713 +0000 UTC m=+0.074920791 container start 8f7414e7ca0b4f677ae83875d81bfcf79c9b5fc177895256ce2b797eb401160b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 06:31:31 np0005604943 gracious_swirles[80578]: 167 167
Feb  2 06:31:31 np0005604943 podman[80558]: 2026-02-02 11:31:31.870322374 +0000 UTC m=+0.077697472 container attach 8f7414e7ca0b4f677ae83875d81bfcf79c9b5fc177895256ce2b797eb401160b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_swirles, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:31:31 np0005604943 systemd[1]: libpod-8f7414e7ca0b4f677ae83875d81bfcf79c9b5fc177895256ce2b797eb401160b.scope: Deactivated successfully.
Feb  2 06:31:31 np0005604943 podman[80558]: 2026-02-02 11:31:31.871635942 +0000 UTC m=+0.079011030 container died 8f7414e7ca0b4f677ae83875d81bfcf79c9b5fc177895256ce2b797eb401160b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:31 np0005604943 systemd[1]: var-lib-containers-storage-overlay-8ca5bf13f8c0a77a65810151e7d61233df9eedde45102e803ca0da3a2996de9e-merged.mount: Deactivated successfully.
Feb  2 06:31:31 np0005604943 podman[80558]: 2026-02-02 11:31:31.809154921 +0000 UTC m=+0.016530019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:31 np0005604943 podman[80558]: 2026-02-02 11:31:31.905814762 +0000 UTC m=+0.113189840 container remove 8f7414e7ca0b4f677ae83875d81bfcf79c9b5fc177895256ce2b797eb401160b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:31 np0005604943 systemd[1]: libpod-conmon-8f7414e7ca0b4f677ae83875d81bfcf79c9b5fc177895256ce2b797eb401160b.scope: Deactivated successfully.
Feb  2 06:31:31 np0005604943 systemd[1]: Reloading.
Feb  2 06:31:32 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:31:32 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:31:32 np0005604943 systemd[1]: Reloading.
Feb  2 06:31:32 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:31:32 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 06:31:32 np0005604943 xenodochial_visvesvaraya[80565]: 
Feb  2 06:31:32 np0005604943 xenodochial_visvesvaraya[80565]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  2 06:31:32 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:31:32 np0005604943 podman[80517]: 2026-02-02 11:31:32.300520779 +0000 UTC m=+0.584751024 container died 8be1d3b1240938419ae7b59a61af5565dede55916283b2cac457cf60c90a6747 (image=quay.io/ceph/ceph:v20, name=xenodochial_visvesvaraya, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.nvrhyg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nvrhyg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: Deploying daemon mgr.compute-0.nvrhyg on compute-0
Feb  2 06:31:32 np0005604943 systemd[1]: libpod-8be1d3b1240938419ae7b59a61af5565dede55916283b2cac457cf60c90a6747.scope: Deactivated successfully.
Feb  2 06:31:32 np0005604943 systemd[1]: var-lib-containers-storage-overlay-81e62a882ca151cb9541a6047d646a3f58fdeb076bc571294c7934d8ffaedb66-merged.mount: Deactivated successfully.
Feb  2 06:31:32 np0005604943 podman[80517]: 2026-02-02 11:31:32.450232407 +0000 UTC m=+0.734462682 container remove 8be1d3b1240938419ae7b59a61af5565dede55916283b2cac457cf60c90a6747 (image=quay.io/ceph/ceph:v20, name=xenodochial_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:31:32 np0005604943 systemd[1]: Starting Ceph mgr.compute-0.nvrhyg for 4548a36b-7cdc-5e3e-a814-4e1571be1fae...
Feb  2 06:31:32 np0005604943 systemd[1]: libpod-conmon-8be1d3b1240938419ae7b59a61af5565dede55916283b2cac457cf60c90a6747.scope: Deactivated successfully.
Feb  2 06:31:32 np0005604943 podman[80756]: 2026-02-02 11:31:32.693556127 +0000 UTC m=+0.052171172 container create 98d684c7aceb0055b056bb4528279f3ce39cd13a5558dc1fece0ecef5bc3cda9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-nvrhyg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 06:31:32 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd1469e5d400ba152334eb141e4e61d7efb0b4b88608db5e9a43504498d5127/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd1469e5d400ba152334eb141e4e61d7efb0b4b88608db5e9a43504498d5127/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd1469e5d400ba152334eb141e4e61d7efb0b4b88608db5e9a43504498d5127/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd1469e5d400ba152334eb141e4e61d7efb0b4b88608db5e9a43504498d5127/merged/var/lib/ceph/mgr/ceph-compute-0.nvrhyg supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:32 np0005604943 podman[80756]: 2026-02-02 11:31:32.66360622 +0000 UTC m=+0.022221315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:32 np0005604943 podman[80756]: 2026-02-02 11:31:32.760984431 +0000 UTC m=+0.119599476 container init 98d684c7aceb0055b056bb4528279f3ce39cd13a5558dc1fece0ecef5bc3cda9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-nvrhyg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 06:31:32 np0005604943 podman[80756]: 2026-02-02 11:31:32.771727873 +0000 UTC m=+0.130342898 container start 98d684c7aceb0055b056bb4528279f3ce39cd13a5558dc1fece0ecef5bc3cda9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-nvrhyg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:32 np0005604943 bash[80756]: 98d684c7aceb0055b056bb4528279f3ce39cd13a5558dc1fece0ecef5bc3cda9
Feb  2 06:31:32 np0005604943 systemd[1]: Started Ceph mgr.compute-0.nvrhyg for 4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:31:32 np0005604943 ceph-mgr[80802]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 06:31:32 np0005604943 ceph-mgr[80802]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:31:32 np0005604943 ceph-mgr[80802]: pidfile_write: ignore empty --pid-file
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:32 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev d7408636-044d-49ec-bb39-515e2ac83158 (Updating mgr deployment (+1 -> 2))
Feb  2 06:31:32 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event d7408636-044d-49ec-bb39-515e2ac83158 (Updating mgr deployment (+1 -> 2)) in 1 seconds
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 06:31:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:32 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'alerts'
Feb  2 06:31:32 np0005604943 python3[80801]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:31:32 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'balancer'
Feb  2 06:31:32 np0005604943 podman[80846]: 2026-02-02 11:31:32.983043056 +0000 UTC m=+0.052553945 container create 71ee9f582846e2323c6bcd45698e0c0ca9d5a6b274b31291c3a3bffbb851d7ae (image=quay.io/ceph/ceph:v20, name=brave_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:33 np0005604943 systemd[1]: Started libpod-conmon-71ee9f582846e2323c6bcd45698e0c0ca9d5a6b274b31291c3a3bffbb851d7ae.scope.
Feb  2 06:31:33 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f2991413dc1e907ce68543b076b9f7944b4947902dd8c5eeb2cc7f226464f3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f2991413dc1e907ce68543b076b9f7944b4947902dd8c5eeb2cc7f226464f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f2991413dc1e907ce68543b076b9f7944b4947902dd8c5eeb2cc7f226464f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:33 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'cephadm'
Feb  2 06:31:33 np0005604943 podman[80846]: 2026-02-02 11:31:32.963599612 +0000 UTC m=+0.033110531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:33 np0005604943 podman[80846]: 2026-02-02 11:31:33.059025307 +0000 UTC m=+0.128536216 container init 71ee9f582846e2323c6bcd45698e0c0ca9d5a6b274b31291c3a3bffbb851d7ae (image=quay.io/ceph/ceph:v20, name=brave_bassi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 06:31:33 np0005604943 podman[80846]: 2026-02-02 11:31:33.06501726 +0000 UTC m=+0.134528149 container start 71ee9f582846e2323c6bcd45698e0c0ca9d5a6b274b31291c3a3bffbb851d7ae (image=quay.io/ceph/ceph:v20, name=brave_bassi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 06:31:33 np0005604943 podman[80846]: 2026-02-02 11:31:33.068688467 +0000 UTC m=+0.138199356 container attach 71ee9f582846e2323c6bcd45698e0c0ca9d5a6b274b31291c3a3bffbb851d7ae (image=quay.io/ceph/ceph:v20, name=brave_bassi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:33 np0005604943 podman[80977]: 2026-02-02 11:31:33.411396407 +0000 UTC m=+0.051621797 container exec fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2007180546' entity='client.admin' 
Feb  2 06:31:33 np0005604943 systemd[1]: libpod-71ee9f582846e2323c6bcd45698e0c0ca9d5a6b274b31291c3a3bffbb851d7ae.scope: Deactivated successfully.
Feb  2 06:31:33 np0005604943 podman[80846]: 2026-02-02 11:31:33.51849344 +0000 UTC m=+0.588004329 container died 71ee9f582846e2323c6bcd45698e0c0ca9d5a6b274b31291c3a3bffbb851d7ae (image=quay.io/ceph/ceph:v20, name=brave_bassi, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:33 np0005604943 podman[80977]: 2026-02-02 11:31:33.527899943 +0000 UTC m=+0.168125303 container exec_died fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 06:31:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:33 np0005604943 systemd[1]: var-lib-containers-storage-overlay-99f2991413dc1e907ce68543b076b9f7944b4947902dd8c5eeb2cc7f226464f3-merged.mount: Deactivated successfully.
Feb  2 06:31:33 np0005604943 podman[80846]: 2026-02-02 11:31:33.570834007 +0000 UTC m=+0.640344926 container remove 71ee9f582846e2323c6bcd45698e0c0ca9d5a6b274b31291c3a3bffbb851d7ae (image=quay.io/ceph/ceph:v20, name=brave_bassi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 06:31:33 np0005604943 systemd[1]: libpod-conmon-71ee9f582846e2323c6bcd45698e0c0ca9d5a6b274b31291c3a3bffbb851d7ae.scope: Deactivated successfully.
Feb  2 06:31:33 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'crash'
Feb  2 06:31:33 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'dashboard'
Feb  2 06:31:33 np0005604943 python3[81110]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:31:33 np0005604943 ansible-async_wrapper.py[79613]: Done in kid B.
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:33 np0005604943 podman[81143]: 2026-02-02 11:31:33.904218017 +0000 UTC m=+0.040004451 container create c1939635923123aa7b7118ca7f1023b14292cd69988c68704d4ceefd7aa8c890 (image=quay.io/ceph/ceph:v20, name=jolly_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:33 np0005604943 systemd[1]: Started libpod-conmon-c1939635923123aa7b7118ca7f1023b14292cd69988c68704d4ceefd7aa8c890.scope.
Feb  2 06:31:33 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88de4897deb7ab885a3da49df6074f8e490c8d3d806efc7ebcf608b7c1f5ff4f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88de4897deb7ab885a3da49df6074f8e490c8d3d806efc7ebcf608b7c1f5ff4f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88de4897deb7ab885a3da49df6074f8e490c8d3d806efc7ebcf608b7c1f5ff4f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:33 np0005604943 podman[81143]: 2026-02-02 11:31:33.885127273 +0000 UTC m=+0.020913747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:33 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Feb  2 06:31:33 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:31:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:31:33 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Feb  2 06:31:33 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Feb  2 06:31:33 np0005604943 podman[81143]: 2026-02-02 11:31:33.98855481 +0000 UTC m=+0.124341284 container init c1939635923123aa7b7118ca7f1023b14292cd69988c68704d4ceefd7aa8c890 (image=quay.io/ceph/ceph:v20, name=jolly_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:31:33 np0005604943 podman[81143]: 2026-02-02 11:31:33.996976994 +0000 UTC m=+0.132763448 container start c1939635923123aa7b7118ca7f1023b14292cd69988c68704d4ceefd7aa8c890 (image=quay.io/ceph/ceph:v20, name=jolly_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 06:31:34 np0005604943 podman[81143]: 2026-02-02 11:31:34.000497097 +0000 UTC m=+0.136283561 container attach c1939635923123aa7b7118ca7f1023b14292cd69988c68704d4ceefd7aa8c890 (image=quay.io/ceph/ceph:v20, name=jolly_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 06:31:34 np0005604943 podman[81273]: 2026-02-02 11:31:34.34747996 +0000 UTC m=+0.052152552 container create 01843a31f79323a8a0a80c086b368570e51d99bd8497aede038f3360d8310275 (image=quay.io/ceph/ceph:v20, name=loving_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:34 np0005604943 systemd[1]: Started libpod-conmon-01843a31f79323a8a0a80c086b368570e51d99bd8497aede038f3360d8310275.scope.
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2463246506' entity='client.admin' 
Feb  2 06:31:34 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:34 np0005604943 podman[81273]: 2026-02-02 11:31:34.414803821 +0000 UTC m=+0.119476383 container init 01843a31f79323a8a0a80c086b368570e51d99bd8497aede038f3360d8310275 (image=quay.io/ceph/ceph:v20, name=loving_johnson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:34 np0005604943 systemd[1]: libpod-c1939635923123aa7b7118ca7f1023b14292cd69988c68704d4ceefd7aa8c890.scope: Deactivated successfully.
Feb  2 06:31:34 np0005604943 podman[81143]: 2026-02-02 11:31:34.41994131 +0000 UTC m=+0.555727784 container died c1939635923123aa7b7118ca7f1023b14292cd69988c68704d4ceefd7aa8c890 (image=quay.io/ceph/ceph:v20, name=jolly_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:34 np0005604943 podman[81273]: 2026-02-02 11:31:34.322868337 +0000 UTC m=+0.027540909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:34 np0005604943 podman[81273]: 2026-02-02 11:31:34.423926145 +0000 UTC m=+0.128598737 container start 01843a31f79323a8a0a80c086b368570e51d99bd8497aede038f3360d8310275 (image=quay.io/ceph/ceph:v20, name=loving_johnson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 06:31:34 np0005604943 loving_johnson[81289]: 167 167
Feb  2 06:31:34 np0005604943 systemd[1]: libpod-01843a31f79323a8a0a80c086b368570e51d99bd8497aede038f3360d8310275.scope: Deactivated successfully.
Feb  2 06:31:34 np0005604943 podman[81273]: 2026-02-02 11:31:34.435426978 +0000 UTC m=+0.140099550 container attach 01843a31f79323a8a0a80c086b368570e51d99bd8497aede038f3360d8310275 (image=quay.io/ceph/ceph:v20, name=loving_johnson, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 06:31:34 np0005604943 podman[81273]: 2026-02-02 11:31:34.435922082 +0000 UTC m=+0.140594634 container died 01843a31f79323a8a0a80c086b368570e51d99bd8497aede038f3360d8310275 (image=quay.io/ceph/ceph:v20, name=loving_johnson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:34 np0005604943 systemd[1]: var-lib-containers-storage-overlay-88de4897deb7ab885a3da49df6074f8e490c8d3d806efc7ebcf608b7c1f5ff4f-merged.mount: Deactivated successfully.
Feb  2 06:31:34 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1ad6234ac9687de938c5f295f211b4d4c36fcfb7a1c14912379c2eb43643c825-merged.mount: Deactivated successfully.
Feb  2 06:31:34 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'devicehealth'
Feb  2 06:31:34 np0005604943 podman[81143]: 2026-02-02 11:31:34.464793679 +0000 UTC m=+0.600580123 container remove c1939635923123aa7b7118ca7f1023b14292cd69988c68704d4ceefd7aa8c890 (image=quay.io/ceph/ceph:v20, name=jolly_allen, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Feb  2 06:31:34 np0005604943 systemd[1]: libpod-conmon-c1939635923123aa7b7118ca7f1023b14292cd69988c68704d4ceefd7aa8c890.scope: Deactivated successfully.
Feb  2 06:31:34 np0005604943 podman[81273]: 2026-02-02 11:31:34.478557388 +0000 UTC m=+0.183229940 container remove 01843a31f79323a8a0a80c086b368570e51d99bd8497aede038f3360d8310275 (image=quay.io/ceph/ceph:v20, name=loving_johnson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Feb  2 06:31:34 np0005604943 systemd[1]: libpod-conmon-01843a31f79323a8a0a80c086b368570e51d99bd8497aede038f3360d8310275.scope: Deactivated successfully.
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2007180546' entity='client.admin' 
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2463246506' entity='client.admin' 
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:34 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.twcemg (unknown last config time)...
Feb  2 06:31:34 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.twcemg (unknown last config time)...
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.twcemg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.twcemg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "mgr services"} : dispatch
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:31:34 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:31:34 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.twcemg on compute-0
Feb  2 06:31:34 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.twcemg on compute-0
Feb  2 06:31:34 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'diskprediction_local'
Feb  2 06:31:34 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-nvrhyg[80792]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  2 06:31:34 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-nvrhyg[80792]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  2 06:31:34 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-nvrhyg[80792]:  from numpy import show_config as show_numpy_config
Feb  2 06:31:34 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'influx'
Feb  2 06:31:34 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:34 np0005604943 python3[81396]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:31:34 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'insights'
Feb  2 06:31:34 np0005604943 podman[81397]: 2026-02-02 11:31:34.821000881 +0000 UTC m=+0.045032066 container create 84eed69fde37eec6bb84dc3077dfe1d929b9ad9eca11b78b579baa00e95c1894 (image=quay.io/ceph/ceph:v20, name=jovial_ardinghelli, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:34 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'iostat'
Feb  2 06:31:34 np0005604943 systemd[1]: Started libpod-conmon-84eed69fde37eec6bb84dc3077dfe1d929b9ad9eca11b78b579baa00e95c1894.scope.
Feb  2 06:31:34 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:34 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746243a2f84bbbe09084db70d98e1b14b60f0a124c279e7aaaba7d3889adb5b4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:34 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746243a2f84bbbe09084db70d98e1b14b60f0a124c279e7aaaba7d3889adb5b4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:34 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746243a2f84bbbe09084db70d98e1b14b60f0a124c279e7aaaba7d3889adb5b4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:34 np0005604943 podman[81397]: 2026-02-02 11:31:34.797930132 +0000 UTC m=+0.021961327 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:34 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'k8sevents'
Feb  2 06:31:34 np0005604943 podman[81397]: 2026-02-02 11:31:34.908837255 +0000 UTC m=+0.132868440 container init 84eed69fde37eec6bb84dc3077dfe1d929b9ad9eca11b78b579baa00e95c1894 (image=quay.io/ceph/ceph:v20, name=jovial_ardinghelli, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:34 np0005604943 podman[81397]: 2026-02-02 11:31:34.913190202 +0000 UTC m=+0.137221407 container start 84eed69fde37eec6bb84dc3077dfe1d929b9ad9eca11b78b579baa00e95c1894 (image=quay.io/ceph/ceph:v20, name=jovial_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:34 np0005604943 podman[81397]: 2026-02-02 11:31:34.916762966 +0000 UTC m=+0.140794141 container attach 84eed69fde37eec6bb84dc3077dfe1d929b9ad9eca11b78b579baa00e95c1894 (image=quay.io/ceph/ceph:v20, name=jovial_ardinghelli, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:34 np0005604943 podman[81428]: 2026-02-02 11:31:34.926966601 +0000 UTC m=+0.054783288 container create e02d09360d60fe8cab2f89dcd759ab81ac9e7f66046dca68d694d1bc99ba5f12 (image=quay.io/ceph/ceph:v20, name=zealous_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Feb  2 06:31:34 np0005604943 systemd[1]: Started libpod-conmon-e02d09360d60fe8cab2f89dcd759ab81ac9e7f66046dca68d694d1bc99ba5f12.scope.
Feb  2 06:31:34 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:34 np0005604943 podman[81428]: 2026-02-02 11:31:34.9835167 +0000 UTC m=+0.111333377 container init e02d09360d60fe8cab2f89dcd759ab81ac9e7f66046dca68d694d1bc99ba5f12 (image=quay.io/ceph/ceph:v20, name=zealous_jones, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:31:34 np0005604943 podman[81428]: 2026-02-02 11:31:34.892132071 +0000 UTC m=+0.019948828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:34 np0005604943 podman[81428]: 2026-02-02 11:31:34.987775453 +0000 UTC m=+0.115592110 container start e02d09360d60fe8cab2f89dcd759ab81ac9e7f66046dca68d694d1bc99ba5f12 (image=quay.io/ceph/ceph:v20, name=zealous_jones, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:34 np0005604943 zealous_jones[81445]: 167 167
Feb  2 06:31:34 np0005604943 systemd[1]: libpod-e02d09360d60fe8cab2f89dcd759ab81ac9e7f66046dca68d694d1bc99ba5f12.scope: Deactivated successfully.
Feb  2 06:31:34 np0005604943 conmon[81445]: conmon e02d09360d60fe8cab2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e02d09360d60fe8cab2f89dcd759ab81ac9e7f66046dca68d694d1bc99ba5f12.scope/container/memory.events
Feb  2 06:31:34 np0005604943 podman[81428]: 2026-02-02 11:31:34.991095669 +0000 UTC m=+0.118912316 container attach e02d09360d60fe8cab2f89dcd759ab81ac9e7f66046dca68d694d1bc99ba5f12 (image=quay.io/ceph/ceph:v20, name=zealous_jones, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 06:31:34 np0005604943 podman[81428]: 2026-02-02 11:31:34.991368017 +0000 UTC m=+0.119184684 container died e02d09360d60fe8cab2f89dcd759ab81ac9e7f66046dca68d694d1bc99ba5f12 (image=quay.io/ceph/ceph:v20, name=zealous_jones, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:35 np0005604943 podman[81428]: 2026-02-02 11:31:35.024805146 +0000 UTC m=+0.152621793 container remove e02d09360d60fe8cab2f89dcd759ab81ac9e7f66046dca68d694d1bc99ba5f12 (image=quay.io/ceph/ceph:v20, name=zealous_jones, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 06:31:35 np0005604943 systemd[1]: libpod-conmon-e02d09360d60fe8cab2f89dcd759ab81ac9e7f66046dca68d694d1bc99ba5f12.scope: Deactivated successfully.
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:35 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'localpool'
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/328478135' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb  2 06:31:35 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'mds_autoscaler'
Feb  2 06:31:35 np0005604943 systemd[1]: var-lib-containers-storage-overlay-8dee852ae3b182024388924b7d761d7ab98232d33b5849c73c0a6eb10c01d8a4-merged.mount: Deactivated successfully.
Feb  2 06:31:35 np0005604943 podman[81576]: 2026-02-02 11:31:35.529411827 +0000 UTC m=+0.049256248 container exec fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: Reconfiguring mon.compute-0 (unknown last config time)...
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: Reconfiguring daemon mon.compute-0 on compute-0
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: Reconfiguring mgr.compute-0.twcemg (unknown last config time)...
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.twcemg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: Reconfiguring daemon mgr.compute-0.twcemg on compute-0
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/328478135' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb  2 06:31:35 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'mirroring'
Feb  2 06:31:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:35 np0005604943 podman[81576]: 2026-02-02 11:31:35.625552173 +0000 UTC m=+0.145396554 container exec_died fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 06:31:35 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'nfs'
Feb  2 06:31:35 np0005604943 ceph-mgr[75558]: [progress INFO root] Writing back 2 completed events
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:35 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'orchestrator'
Feb  2 06:31:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/328478135' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Feb  2 06:31:36 np0005604943 jovial_ardinghelli[81426]: set require_min_compat_client to mimic
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Feb  2 06:31:36 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'osd_perf_query'
Feb  2 06:31:36 np0005604943 systemd[1]: libpod-84eed69fde37eec6bb84dc3077dfe1d929b9ad9eca11b78b579baa00e95c1894.scope: Deactivated successfully.
Feb  2 06:31:36 np0005604943 podman[81397]: 2026-02-02 11:31:36.109366442 +0000 UTC m=+1.333397607 container died 84eed69fde37eec6bb84dc3077dfe1d929b9ad9eca11b78b579baa00e95c1894 (image=quay.io/ceph/ceph:v20, name=jovial_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:36 np0005604943 systemd[1]: var-lib-containers-storage-overlay-746243a2f84bbbe09084db70d98e1b14b60f0a124c279e7aaaba7d3889adb5b4-merged.mount: Deactivated successfully.
Feb  2 06:31:36 np0005604943 podman[81397]: 2026-02-02 11:31:36.156725984 +0000 UTC m=+1.380757149 container remove 84eed69fde37eec6bb84dc3077dfe1d929b9ad9eca11b78b579baa00e95c1894 (image=quay.io/ceph/ceph:v20, name=jovial_ardinghelli, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 06:31:36 np0005604943 systemd[1]: libpod-conmon-84eed69fde37eec6bb84dc3077dfe1d929b9ad9eca11b78b579baa00e95c1894.scope: Deactivated successfully.
Feb  2 06:31:36 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'osd_support'
Feb  2 06:31:36 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'pg_autoscaler'
Feb  2 06:31:36 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'progress'
Feb  2 06:31:36 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'prometheus'
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/328478135' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb  2 06:31:36 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:36 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'rbd_support'
Feb  2 06:31:36 np0005604943 python3[81755]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:31:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:31:36 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'rgw'
Feb  2 06:31:36 np0005604943 podman[81756]: 2026-02-02 11:31:36.85653201 +0000 UTC m=+0.038634439 container create b19e0a6574af5b8c6e60a1285d95a8990b9f693d913d0a4c0ae22b7f9706f9ba (image=quay.io/ceph/ceph:v20, name=nice_jones, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:36 np0005604943 systemd[1]: Started libpod-conmon-b19e0a6574af5b8c6e60a1285d95a8990b9f693d913d0a4c0ae22b7f9706f9ba.scope.
Feb  2 06:31:36 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40956237519efc5736990243772a32c3e25a5335d8b1788f0f60cab92672b034/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40956237519efc5736990243772a32c3e25a5335d8b1788f0f60cab92672b034/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40956237519efc5736990243772a32c3e25a5335d8b1788f0f60cab92672b034/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:36 np0005604943 podman[81756]: 2026-02-02 11:31:36.923849581 +0000 UTC m=+0.105952030 container init b19e0a6574af5b8c6e60a1285d95a8990b9f693d913d0a4c0ae22b7f9706f9ba (image=quay.io/ceph/ceph:v20, name=nice_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 06:31:36 np0005604943 podman[81756]: 2026-02-02 11:31:36.928428064 +0000 UTC m=+0.110530523 container start b19e0a6574af5b8c6e60a1285d95a8990b9f693d913d0a4c0ae22b7f9706f9ba (image=quay.io/ceph/ceph:v20, name=nice_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:36 np0005604943 podman[81756]: 2026-02-02 11:31:36.93241742 +0000 UTC m=+0.114519869 container attach b19e0a6574af5b8c6e60a1285d95a8990b9f693d913d0a4c0ae22b7f9706f9ba (image=quay.io/ceph/ceph:v20, name=nice_jones, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:36 np0005604943 podman[81756]: 2026-02-02 11:31:36.838916341 +0000 UTC m=+0.021018820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:37 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'rook'
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:37 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'selftest'
Feb  2 06:31:37 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'smb'
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Added host compute-0
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Added host compute-0
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Saving service mon spec with placement compute-0
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev 19ab880b-92e8-4b58-b42b-ef024e386dbf (Updating mgr deployment (-1 -> 1))
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.nvrhyg from compute-0 -- ports [8765]
Feb  2 06:31:37 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.nvrhyg from compute-0 -- ports [8765]
Feb  2 06:31:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:37 np0005604943 nice_jones[81772]: Added host 'compute-0' with addr '192.168.122.100'
Feb  2 06:31:37 np0005604943 nice_jones[81772]: Scheduled mon update...
Feb  2 06:31:37 np0005604943 nice_jones[81772]: Scheduled mgr update...
Feb  2 06:31:37 np0005604943 nice_jones[81772]: Scheduled osd.default_drive_group update...
Feb  2 06:31:37 np0005604943 systemd[1]: libpod-b19e0a6574af5b8c6e60a1285d95a8990b9f693d913d0a4c0ae22b7f9706f9ba.scope: Deactivated successfully.
Feb  2 06:31:37 np0005604943 podman[81756]: 2026-02-02 11:31:37.860541422 +0000 UTC m=+1.042643861 container died b19e0a6574af5b8c6e60a1285d95a8990b9f693d913d0a4c0ae22b7f9706f9ba (image=quay.io/ceph/ceph:v20, name=nice_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 06:31:37 np0005604943 systemd[1]: var-lib-containers-storage-overlay-40956237519efc5736990243772a32c3e25a5335d8b1788f0f60cab92672b034-merged.mount: Deactivated successfully.
Feb  2 06:31:37 np0005604943 podman[81756]: 2026-02-02 11:31:37.911310134 +0000 UTC m=+1.093412563 container remove b19e0a6574af5b8c6e60a1285d95a8990b9f693d913d0a4c0ae22b7f9706f9ba (image=quay.io/ceph/ceph:v20, name=nice_jones, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:37 np0005604943 systemd[1]: libpod-conmon-b19e0a6574af5b8c6e60a1285d95a8990b9f693d913d0a4c0ae22b7f9706f9ba.scope: Deactivated successfully.
Feb  2 06:31:37 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'snap_schedule'
Feb  2 06:31:38 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'stats'
Feb  2 06:31:38 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'status'
Feb  2 06:31:38 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'telegraf'
Feb  2 06:31:38 np0005604943 systemd[1]: Stopping Ceph mgr.compute-0.nvrhyg for 4548a36b-7cdc-5e3e-a814-4e1571be1fae...
Feb  2 06:31:38 np0005604943 ceph-mgr[80802]: mgr[py] Loading python module 'telemetry'
Feb  2 06:31:38 np0005604943 python3[81969]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:31:38 np0005604943 podman[82000]: 2026-02-02 11:31:38.37365069 +0000 UTC m=+0.041067921 container create 8dd988cc64d4ce1e7564f6ac4709d93ab823c3fb6f3ade3bd67a75de0ad085f4 (image=quay.io/ceph/ceph:v20, name=serene_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:31:38 np0005604943 systemd[1]: Started libpod-conmon-8dd988cc64d4ce1e7564f6ac4709d93ab823c3fb6f3ade3bd67a75de0ad085f4.scope.
Feb  2 06:31:38 np0005604943 podman[81998]: 2026-02-02 11:31:38.419297042 +0000 UTC m=+0.093317554 container died 98d684c7aceb0055b056bb4528279f3ce39cd13a5558dc1fece0ecef5bc3cda9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-nvrhyg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:31:38 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:38 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8434b38508d4a75271b45b75d0bf0fc8523ea5cb57733e17abbf25a3d54faeae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:38 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8434b38508d4a75271b45b75d0bf0fc8523ea5cb57733e17abbf25a3d54faeae/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:38 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8434b38508d4a75271b45b75d0bf0fc8523ea5cb57733e17abbf25a3d54faeae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:38 np0005604943 systemd[1]: var-lib-containers-storage-overlay-bcd1469e5d400ba152334eb141e4e61d7efb0b4b88608db5e9a43504498d5127-merged.mount: Deactivated successfully.
Feb  2 06:31:38 np0005604943 podman[82000]: 2026-02-02 11:31:38.447643734 +0000 UTC m=+0.115060995 container init 8dd988cc64d4ce1e7564f6ac4709d93ab823c3fb6f3ade3bd67a75de0ad085f4 (image=quay.io/ceph/ceph:v20, name=serene_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 06:31:38 np0005604943 podman[82000]: 2026-02-02 11:31:38.356502913 +0000 UTC m=+0.023920164 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:31:38 np0005604943 podman[82000]: 2026-02-02 11:31:38.45439446 +0000 UTC m=+0.121811701 container start 8dd988cc64d4ce1e7564f6ac4709d93ab823c3fb6f3ade3bd67a75de0ad085f4 (image=quay.io/ceph/ceph:v20, name=serene_merkle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:31:38 np0005604943 podman[82000]: 2026-02-02 11:31:38.465774839 +0000 UTC m=+0.133192070 container attach 8dd988cc64d4ce1e7564f6ac4709d93ab823c3fb6f3ade3bd67a75de0ad085f4 (image=quay.io/ceph/ceph:v20, name=serene_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 06:31:38 np0005604943 podman[81998]: 2026-02-02 11:31:38.47269294 +0000 UTC m=+0.146713462 container remove 98d684c7aceb0055b056bb4528279f3ce39cd13a5558dc1fece0ecef5bc3cda9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-nvrhyg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 06:31:38 np0005604943 bash[81998]: ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-nvrhyg
Feb  2 06:31:38 np0005604943 systemd[1]: ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae@mgr.compute-0.nvrhyg.service: Main process exited, code=exited, status=143/n/a
Feb  2 06:31:38 np0005604943 systemd[1]: ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae@mgr.compute-0.nvrhyg.service: Failed with result 'exit-code'.
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:38 np0005604943 systemd[1]: Stopped Ceph mgr.compute-0.nvrhyg for 4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:31:38 np0005604943 systemd[1]: ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae@mgr.compute-0.nvrhyg.service: Consumed 6.409s CPU time, 414.8M memory peak, read 0B from disk, written 161.0K to disk.
Feb  2 06:31:38 np0005604943 systemd[1]: Reloading.
Feb  2 06:31:38 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:31:38 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:31:38 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:38 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.nvrhyg
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.nvrhyg"} v 0)
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.nvrhyg"} : dispatch
Feb  2 06:31:38 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.nvrhyg
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.nvrhyg"}]': finished
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165650771' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb  2 06:31:38 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev 19ab880b-92e8-4b58-b42b-ef024e386dbf (Updating mgr deployment (-1 -> 1))
Feb  2 06:31:38 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event 19ab880b-92e8-4b58-b42b-ef024e386dbf (Updating mgr deployment (-1 -> 1)) in 1 seconds
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 06:31:38 np0005604943 serene_merkle[82028]: 
Feb  2 06:31:38 np0005604943 serene_merkle[82028]: {"fsid":"4548a36b-7cdc-5e3e-a814-4e1571be1fae","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":47,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-02-02T11:30:49:598633+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-02T11:30:49.600559+0000","services":{}},"progress_events":{}}
Feb  2 06:31:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:38 np0005604943 systemd[1]: libpod-8dd988cc64d4ce1e7564f6ac4709d93ab823c3fb6f3ade3bd67a75de0ad085f4.scope: Deactivated successfully.
Feb  2 06:31:38 np0005604943 podman[82000]: 2026-02-02 11:31:38.978097034 +0000 UTC m=+0.645514295 container died 8dd988cc64d4ce1e7564f6ac4709d93ab823c3fb6f3ade3bd67a75de0ad085f4 (image=quay.io/ceph/ceph:v20, name=serene_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 06:31:39 np0005604943 systemd[1]: var-lib-containers-storage-overlay-8434b38508d4a75271b45b75d0bf0fc8523ea5cb57733e17abbf25a3d54faeae-merged.mount: Deactivated successfully.
Feb  2 06:31:39 np0005604943 podman[82000]: 2026-02-02 11:31:39.016508987 +0000 UTC m=+0.683926208 container remove 8dd988cc64d4ce1e7564f6ac4709d93ab823c3fb6f3ade3bd67a75de0ad085f4 (image=quay.io/ceph/ceph:v20, name=serene_merkle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:39 np0005604943 systemd[1]: libpod-conmon-8dd988cc64d4ce1e7564f6ac4709d93ab823c3fb6f3ade3bd67a75de0ad085f4.scope: Deactivated successfully.
Feb  2 06:31:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:39 np0005604943 podman[82266]: 2026-02-02 11:31:39.592538097 +0000 UTC m=+0.063915542 container exec fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:31:39 np0005604943 ceph-mon[75271]: Added host compute-0
Feb  2 06:31:39 np0005604943 ceph-mon[75271]: Saving service mon spec with placement compute-0
Feb  2 06:31:39 np0005604943 ceph-mon[75271]: Saving service mgr spec with placement compute-0
Feb  2 06:31:39 np0005604943 ceph-mon[75271]: Marking host: compute-0 for OSDSpec preview refresh.
Feb  2 06:31:39 np0005604943 ceph-mon[75271]: Saving service osd.default_drive_group spec with placement compute-0
Feb  2 06:31:39 np0005604943 ceph-mon[75271]: Removing daemon mgr.compute-0.nvrhyg from compute-0 -- ports [8765]
Feb  2 06:31:39 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.nvrhyg"} : dispatch
Feb  2 06:31:39 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.nvrhyg"}]': finished
Feb  2 06:31:39 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:39 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:39 np0005604943 podman[82266]: 2026-02-02 11:31:39.70893959 +0000 UTC m=+0.180317035 container exec_died fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:31:40 np0005604943 podman[82420]: 2026-02-02 11:31:40.548147876 +0000 UTC m=+0.052322316 container create e886a5f77fbfed80ef2272ec13072047305c9899c3fd6240823a4b51c109fa0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 06:31:40 np0005604943 systemd[1]: Started libpod-conmon-e886a5f77fbfed80ef2272ec13072047305c9899c3fd6240823a4b51c109fa0e.scope.
Feb  2 06:31:40 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: Removing key for mgr.compute-0.nvrhyg
Feb  2 06:31:40 np0005604943 podman[82420]: 2026-02-02 11:31:40.522658728 +0000 UTC m=+0.026833208 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:31:40 np0005604943 podman[82420]: 2026-02-02 11:31:40.628193235 +0000 UTC m=+0.132367675 container init e886a5f77fbfed80ef2272ec13072047305c9899c3fd6240823a4b51c109fa0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:40 np0005604943 podman[82420]: 2026-02-02 11:31:40.634831978 +0000 UTC m=+0.139006388 container start e886a5f77fbfed80ef2272ec13072047305c9899c3fd6240823a4b51c109fa0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 06:31:40 np0005604943 musing_engelbart[82436]: 167 167
Feb  2 06:31:40 np0005604943 podman[82420]: 2026-02-02 11:31:40.63940051 +0000 UTC m=+0.143574930 container attach e886a5f77fbfed80ef2272ec13072047305c9899c3fd6240823a4b51c109fa0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 06:31:40 np0005604943 systemd[1]: libpod-e886a5f77fbfed80ef2272ec13072047305c9899c3fd6240823a4b51c109fa0e.scope: Deactivated successfully.
Feb  2 06:31:40 np0005604943 podman[82420]: 2026-02-02 11:31:40.639723229 +0000 UTC m=+0.143897639 container died e886a5f77fbfed80ef2272ec13072047305c9899c3fd6240823a4b51c109fa0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:40 np0005604943 systemd[1]: var-lib-containers-storage-overlay-dddd18b50f7e9fa63711a871c23b36dec42723be30a80d57d8e2feeeac0cbb65-merged.mount: Deactivated successfully.
Feb  2 06:31:40 np0005604943 podman[82420]: 2026-02-02 11:31:40.683485788 +0000 UTC m=+0.187660198 container remove e886a5f77fbfed80ef2272ec13072047305c9899c3fd6240823a4b51c109fa0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:40 np0005604943 systemd[1]: libpod-conmon-e886a5f77fbfed80ef2272ec13072047305c9899c3fd6240823a4b51c109fa0e.scope: Deactivated successfully.
Feb  2 06:31:40 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:31:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:31:40 np0005604943 ceph-mgr[75558]: [progress INFO root] Writing back 3 completed events
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 06:31:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:31:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:31:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:31:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:31:40 np0005604943 podman[82461]: 2026-02-02 11:31:40.830470817 +0000 UTC m=+0.056206960 container create 37c7a568572ab4df9d6f0bf31c1e0aa9e901f506d52b214adc135e353180b82e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_brown, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:40 np0005604943 systemd[1]: Started libpod-conmon-37c7a568572ab4df9d6f0bf31c1e0aa9e901f506d52b214adc135e353180b82e.scope.
Feb  2 06:31:40 np0005604943 podman[82461]: 2026-02-02 11:31:40.805752341 +0000 UTC m=+0.031488544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:40 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a5c2f1a9a1ba53b35f6448e2078254fd0fe809a31ff478a6542e55a32a4c8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a5c2f1a9a1ba53b35f6448e2078254fd0fe809a31ff478a6542e55a32a4c8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a5c2f1a9a1ba53b35f6448e2078254fd0fe809a31ff478a6542e55a32a4c8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a5c2f1a9a1ba53b35f6448e2078254fd0fe809a31ff478a6542e55a32a4c8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a5c2f1a9a1ba53b35f6448e2078254fd0fe809a31ff478a6542e55a32a4c8a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:40 np0005604943 podman[82461]: 2026-02-02 11:31:40.924641865 +0000 UTC m=+0.150377968 container init 37c7a568572ab4df9d6f0bf31c1e0aa9e901f506d52b214adc135e353180b82e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 06:31:40 np0005604943 podman[82461]: 2026-02-02 11:31:40.931101952 +0000 UTC m=+0.156838055 container start 37c7a568572ab4df9d6f0bf31c1e0aa9e901f506d52b214adc135e353180b82e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_brown, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:31:40 np0005604943 podman[82461]: 2026-02-02 11:31:40.93481377 +0000 UTC m=+0.160549873 container attach 37c7a568572ab4df9d6f0bf31c1e0aa9e901f506d52b214adc135e353180b82e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 06:31:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:41 np0005604943 naughty_brown[82477]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:31:41 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:41 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:41 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e474a366-92f2-422d-9a63-15528361045b
Feb  2 06:31:41 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:31:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "e474a366-92f2-422d-9a63-15528361045b"} v 0)
Feb  2 06:31:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2523588204' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e474a366-92f2-422d-9a63-15528361045b"} : dispatch
Feb  2 06:31:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Feb  2 06:31:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 06:31:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2523588204' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e474a366-92f2-422d-9a63-15528361045b"}]': finished
Feb  2 06:31:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Feb  2 06:31:42 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Feb  2 06:31:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 06:31:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 06:31:42 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 06:31:42 np0005604943 naughty_brown[82477]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Feb  2 06:31:42 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Feb  2 06:31:42 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  2 06:31:42 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:42 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Feb  2 06:31:42 np0005604943 lvm[82570]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:31:42 np0005604943 lvm[82570]: VG ceph_vg0 finished
Feb  2 06:31:42 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:42 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2523588204' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "e474a366-92f2-422d-9a63-15528361045b"} : dispatch
Feb  2 06:31:42 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2523588204' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e474a366-92f2-422d-9a63-15528361045b"}]': finished
Feb  2 06:31:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb  2 06:31:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1775626015' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb  2 06:31:42 np0005604943 naughty_brown[82477]: stderr: got monmap epoch 1
Feb  2 06:31:42 np0005604943 naughty_brown[82477]: --> Creating keyring file for osd.0
Feb  2 06:31:42 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Feb  2 06:31:42 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Feb  2 06:31:42 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid e474a366-92f2-422d-9a63-15528361045b --setuser ceph --setgroup ceph
Feb  2 06:31:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:43 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb  2 06:31:43 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb  2 06:31:43 np0005604943 naughty_brown[82477]: stderr: 2026-02-02T11:31:42.941+0000 7fcd7a9a38c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Feb  2 06:31:43 np0005604943 naughty_brown[82477]: stderr: 2026-02-02T11:31:42.966+0000 7fcd7a9a38c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Feb  2 06:31:43 np0005604943 naughty_brown[82477]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Feb  2 06:31:43 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  2 06:31:43 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: --> ceph-volume lvm activate successful for osd ID: 0
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6e5a583e-2cb6-47b2-abc4-810fb33b121b
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b"} v 0)
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/754856039' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b"} : dispatch
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/754856039' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b"}]': finished
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 06:31:44 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 06:31:44 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 06:31:44 np0005604943 lvm[83515]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:31:44 np0005604943 lvm[83515]: VG ceph_vg1 finished
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:44 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Feb  2 06:31:44 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: Cluster is now healthy
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/754856039' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b"} : dispatch
Feb  2 06:31:44 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/754856039' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b"}]': finished
Feb  2 06:31:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb  2 06:31:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2219616785' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb  2 06:31:45 np0005604943 naughty_brown[82477]: stderr: got monmap epoch 1
Feb  2 06:31:45 np0005604943 naughty_brown[82477]: --> Creating keyring file for osd.1
Feb  2 06:31:45 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Feb  2 06:31:45 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Feb  2 06:31:45 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 6e5a583e-2cb6-47b2-abc4-810fb33b121b --setuser ceph --setgroup ceph
Feb  2 06:31:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: stderr: 2026-02-02T11:31:45.299+0000 7f4ce468a8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: stderr: 2026-02-02T11:31:45.320+0000 7f4ce468a8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: --> ceph-volume lvm activate successful for osd ID: 1
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5
Feb  2 06:31:46 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5"} v 0)
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3446177158' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5"} : dispatch
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3446177158' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5"}]': finished
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 06:31:46 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 06:31:46 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 06:31:46 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3446177158' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5"} : dispatch
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3446177158' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5"}]': finished
Feb  2 06:31:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:31:46 np0005604943 lvm[84470]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:31:46 np0005604943 lvm[84470]: VG ceph_vg2 finished
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb  2 06:31:46 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Feb  2 06:31:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb  2 06:31:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3136645385' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb  2 06:31:47 np0005604943 naughty_brown[82477]: stderr: got monmap epoch 1
Feb  2 06:31:47 np0005604943 naughty_brown[82477]: --> Creating keyring file for osd.2
Feb  2 06:31:47 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Feb  2 06:31:47 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Feb  2 06:31:47 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5 --setuser ceph --setgroup ceph
Feb  2 06:31:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:48 np0005604943 naughty_brown[82477]: stderr: 2026-02-02T11:31:47.505+0000 7f84607588c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Feb  2 06:31:48 np0005604943 naughty_brown[82477]: stderr: 2026-02-02T11:31:47.530+0000 7f84607588c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Feb  2 06:31:48 np0005604943 naughty_brown[82477]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Feb  2 06:31:48 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  2 06:31:48 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb  2 06:31:48 np0005604943 naughty_brown[82477]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb  2 06:31:48 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb  2 06:31:48 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb  2 06:31:48 np0005604943 naughty_brown[82477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  2 06:31:48 np0005604943 naughty_brown[82477]: --> ceph-volume lvm activate successful for osd ID: 2
Feb  2 06:31:48 np0005604943 naughty_brown[82477]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Feb  2 06:31:48 np0005604943 systemd[1]: libpod-37c7a568572ab4df9d6f0bf31c1e0aa9e901f506d52b214adc135e353180b82e.scope: Deactivated successfully.
Feb  2 06:31:48 np0005604943 systemd[1]: libpod-37c7a568572ab4df9d6f0bf31c1e0aa9e901f506d52b214adc135e353180b82e.scope: Consumed 5.813s CPU time.
Feb  2 06:31:48 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:48 np0005604943 podman[85389]: 2026-02-02 11:31:48.732481639 +0000 UTC m=+0.029329661 container died 37c7a568572ab4df9d6f0bf31c1e0aa9e901f506d52b214adc135e353180b82e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 06:31:48 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b2a5c2f1a9a1ba53b35f6448e2078254fd0fe809a31ff478a6542e55a32a4c8a-merged.mount: Deactivated successfully.
Feb  2 06:31:48 np0005604943 podman[85389]: 2026-02-02 11:31:48.778144303 +0000 UTC m=+0.074992335 container remove 37c7a568572ab4df9d6f0bf31c1e0aa9e901f506d52b214adc135e353180b82e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_brown, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:48 np0005604943 systemd[1]: libpod-conmon-37c7a568572ab4df9d6f0bf31c1e0aa9e901f506d52b214adc135e353180b82e.scope: Deactivated successfully.
Feb  2 06:31:49 np0005604943 podman[85466]: 2026-02-02 11:31:49.223993011 +0000 UTC m=+0.037637892 container create ae11cfcf60cae7710d058fba2bc7b75a633f9a6c71dcf4e7d53c3d958b29911b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:49 np0005604943 systemd[1]: Started libpod-conmon-ae11cfcf60cae7710d058fba2bc7b75a633f9a6c71dcf4e7d53c3d958b29911b.scope.
Feb  2 06:31:49 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:49 np0005604943 podman[85466]: 2026-02-02 11:31:49.207531885 +0000 UTC m=+0.021176806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:49 np0005604943 podman[85466]: 2026-02-02 11:31:49.304124154 +0000 UTC m=+0.117769105 container init ae11cfcf60cae7710d058fba2bc7b75a633f9a6c71dcf4e7d53c3d958b29911b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wright, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:49 np0005604943 podman[85466]: 2026-02-02 11:31:49.313190466 +0000 UTC m=+0.126835347 container start ae11cfcf60cae7710d058fba2bc7b75a633f9a6c71dcf4e7d53c3d958b29911b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wright, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 06:31:49 np0005604943 podman[85466]: 2026-02-02 11:31:49.316611034 +0000 UTC m=+0.130256035 container attach ae11cfcf60cae7710d058fba2bc7b75a633f9a6c71dcf4e7d53c3d958b29911b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wright, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 06:31:49 np0005604943 pedantic_wright[85483]: 167 167
Feb  2 06:31:49 np0005604943 systemd[1]: libpod-ae11cfcf60cae7710d058fba2bc7b75a633f9a6c71dcf4e7d53c3d958b29911b.scope: Deactivated successfully.
Feb  2 06:31:49 np0005604943 podman[85488]: 2026-02-02 11:31:49.364955026 +0000 UTC m=+0.031318838 container died ae11cfcf60cae7710d058fba2bc7b75a633f9a6c71dcf4e7d53c3d958b29911b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wright, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 06:31:49 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a3253941ade2f62d9e463f123eaaddec0b8ff279223af3246188b8623de35275-merged.mount: Deactivated successfully.
Feb  2 06:31:49 np0005604943 podman[85488]: 2026-02-02 11:31:49.398071796 +0000 UTC m=+0.064435608 container remove ae11cfcf60cae7710d058fba2bc7b75a633f9a6c71dcf4e7d53c3d958b29911b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wright, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:49 np0005604943 systemd[1]: libpod-conmon-ae11cfcf60cae7710d058fba2bc7b75a633f9a6c71dcf4e7d53c3d958b29911b.scope: Deactivated successfully.
Feb  2 06:31:49 np0005604943 podman[85510]: 2026-02-02 11:31:49.513060527 +0000 UTC m=+0.034078849 container create 0d3b79b972c5b6e74d8fb02c0b31399d4769d976cda36644dc78b8b37b6cd1f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 06:31:49 np0005604943 systemd[1]: Started libpod-conmon-0d3b79b972c5b6e74d8fb02c0b31399d4769d976cda36644dc78b8b37b6cd1f5.scope.
Feb  2 06:31:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:49 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606b6d4ce1ce4244996fded8ea0c2ef33e76caeb626e4b14ed23c69e4e49f083/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606b6d4ce1ce4244996fded8ea0c2ef33e76caeb626e4b14ed23c69e4e49f083/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606b6d4ce1ce4244996fded8ea0c2ef33e76caeb626e4b14ed23c69e4e49f083/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/606b6d4ce1ce4244996fded8ea0c2ef33e76caeb626e4b14ed23c69e4e49f083/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:49 np0005604943 podman[85510]: 2026-02-02 11:31:49.585502046 +0000 UTC m=+0.106520458 container init 0d3b79b972c5b6e74d8fb02c0b31399d4769d976cda36644dc78b8b37b6cd1f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:49 np0005604943 podman[85510]: 2026-02-02 11:31:49.59048574 +0000 UTC m=+0.111504062 container start 0d3b79b972c5b6e74d8fb02c0b31399d4769d976cda36644dc78b8b37b6cd1f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 06:31:49 np0005604943 podman[85510]: 2026-02-02 11:31:49.593526858 +0000 UTC m=+0.114545290 container attach 0d3b79b972c5b6e74d8fb02c0b31399d4769d976cda36644dc78b8b37b6cd1f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 06:31:49 np0005604943 podman[85510]: 2026-02-02 11:31:49.500095311 +0000 UTC m=+0.021113653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]: {
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:    "0": [
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:        {
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "devices": [
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "/dev/loop3"
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            ],
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_name": "ceph_lv0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_size": "21470642176",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "name": "ceph_lv0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "tags": {
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.cluster_name": "ceph",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.crush_device_class": "",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.encrypted": "0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.objectstore": "bluestore",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.osd_id": "0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.type": "block",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.vdo": "0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.with_tpm": "0"
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            },
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "type": "block",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "vg_name": "ceph_vg0"
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:        }
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:    ],
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:    "1": [
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:        {
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "devices": [
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "/dev/loop4"
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            ],
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_name": "ceph_lv1",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_size": "21470642176",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "name": "ceph_lv1",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "tags": {
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.cluster_name": "ceph",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.crush_device_class": "",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.encrypted": "0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.objectstore": "bluestore",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.osd_id": "1",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.type": "block",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.vdo": "0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.with_tpm": "0"
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            },
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "type": "block",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "vg_name": "ceph_vg1"
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:        }
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:    ],
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:    "2": [
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:        {
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "devices": [
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "/dev/loop5"
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            ],
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_name": "ceph_lv2",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_size": "21470642176",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "name": "ceph_lv2",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "tags": {
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.cluster_name": "ceph",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.crush_device_class": "",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.encrypted": "0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.objectstore": "bluestore",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.osd_id": "2",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.type": "block",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.vdo": "0",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:                "ceph.with_tpm": "0"
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            },
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "type": "block",
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:            "vg_name": "ceph_vg2"
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:        }
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]:    ]
Feb  2 06:31:49 np0005604943 priceless_zhukovsky[85526]: }
Feb  2 06:31:49 np0005604943 systemd[1]: libpod-0d3b79b972c5b6e74d8fb02c0b31399d4769d976cda36644dc78b8b37b6cd1f5.scope: Deactivated successfully.
Feb  2 06:31:49 np0005604943 podman[85510]: 2026-02-02 11:31:49.865508599 +0000 UTC m=+0.386526971 container died 0d3b79b972c5b6e74d8fb02c0b31399d4769d976cda36644dc78b8b37b6cd1f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 06:31:49 np0005604943 systemd[1]: var-lib-containers-storage-overlay-606b6d4ce1ce4244996fded8ea0c2ef33e76caeb626e4b14ed23c69e4e49f083-merged.mount: Deactivated successfully.
Feb  2 06:31:49 np0005604943 podman[85510]: 2026-02-02 11:31:49.904893751 +0000 UTC m=+0.425912083 container remove 0d3b79b972c5b6e74d8fb02c0b31399d4769d976cda36644dc78b8b37b6cd1f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:49 np0005604943 systemd[1]: libpod-conmon-0d3b79b972c5b6e74d8fb02c0b31399d4769d976cda36644dc78b8b37b6cd1f5.scope: Deactivated successfully.
Feb  2 06:31:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb  2 06:31:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb  2 06:31:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:31:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:31:49 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Feb  2 06:31:49 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Feb  2 06:31:50 np0005604943 podman[85637]: 2026-02-02 11:31:50.376226047 +0000 UTC m=+0.042903673 container create ea1cd92b0f968eae0ab519555fa95a3666ca07fe92a21d3b7b857f8e297f8fea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hugle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 06:31:50 np0005604943 systemd[1]: Started libpod-conmon-ea1cd92b0f968eae0ab519555fa95a3666ca07fe92a21d3b7b857f8e297f8fea.scope.
Feb  2 06:31:50 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:50 np0005604943 podman[85637]: 2026-02-02 11:31:50.44639652 +0000 UTC m=+0.113074246 container init ea1cd92b0f968eae0ab519555fa95a3666ca07fe92a21d3b7b857f8e297f8fea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 06:31:50 np0005604943 podman[85637]: 2026-02-02 11:31:50.453901448 +0000 UTC m=+0.120579104 container start ea1cd92b0f968eae0ab519555fa95a3666ca07fe92a21d3b7b857f8e297f8fea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 06:31:50 np0005604943 systemd[1]: libpod-ea1cd92b0f968eae0ab519555fa95a3666ca07fe92a21d3b7b857f8e297f8fea.scope: Deactivated successfully.
Feb  2 06:31:50 np0005604943 condescending_hugle[85653]: 167 167
Feb  2 06:31:50 np0005604943 podman[85637]: 2026-02-02 11:31:50.36079121 +0000 UTC m=+0.027468896 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:50 np0005604943 conmon[85653]: conmon ea1cd92b0f968eae0ab5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea1cd92b0f968eae0ab519555fa95a3666ca07fe92a21d3b7b857f8e297f8fea.scope/container/memory.events
Feb  2 06:31:50 np0005604943 podman[85637]: 2026-02-02 11:31:50.457280076 +0000 UTC m=+0.123957792 container attach ea1cd92b0f968eae0ab519555fa95a3666ca07fe92a21d3b7b857f8e297f8fea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:50 np0005604943 podman[85637]: 2026-02-02 11:31:50.458359057 +0000 UTC m=+0.125036723 container died ea1cd92b0f968eae0ab519555fa95a3666ca07fe92a21d3b7b857f8e297f8fea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hugle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 06:31:50 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1ee060e08e3f4165e1bb6a27e03d0c5bb3fcbf373540098866c4ab5086458880-merged.mount: Deactivated successfully.
Feb  2 06:31:50 np0005604943 podman[85637]: 2026-02-02 11:31:50.49467755 +0000 UTC m=+0.161355186 container remove ea1cd92b0f968eae0ab519555fa95a3666ca07fe92a21d3b7b857f8e297f8fea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hugle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:50 np0005604943 systemd[1]: libpod-conmon-ea1cd92b0f968eae0ab519555fa95a3666ca07fe92a21d3b7b857f8e297f8fea.scope: Deactivated successfully.
Feb  2 06:31:50 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb  2 06:31:50 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:50 np0005604943 podman[85684]: 2026-02-02 11:31:50.742298925 +0000 UTC m=+0.057532478 container create 23a65fc0efeebe20a75c7c3db01eea5a2535d6480fba3858a2be32633375c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate-test, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:50 np0005604943 systemd[1]: Started libpod-conmon-23a65fc0efeebe20a75c7c3db01eea5a2535d6480fba3858a2be32633375c0d2.scope.
Feb  2 06:31:50 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e370eea81accac406e8e02421582cb35afeb6ebdc7f549718e39c9f210c57249/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e370eea81accac406e8e02421582cb35afeb6ebdc7f549718e39c9f210c57249/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e370eea81accac406e8e02421582cb35afeb6ebdc7f549718e39c9f210c57249/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e370eea81accac406e8e02421582cb35afeb6ebdc7f549718e39c9f210c57249/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e370eea81accac406e8e02421582cb35afeb6ebdc7f549718e39c9f210c57249/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:50 np0005604943 podman[85684]: 2026-02-02 11:31:50.715944041 +0000 UTC m=+0.031177664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:50 np0005604943 podman[85684]: 2026-02-02 11:31:50.824250759 +0000 UTC m=+0.139484302 container init 23a65fc0efeebe20a75c7c3db01eea5a2535d6480fba3858a2be32633375c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate-test, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:50 np0005604943 podman[85684]: 2026-02-02 11:31:50.831647123 +0000 UTC m=+0.146880646 container start 23a65fc0efeebe20a75c7c3db01eea5a2535d6480fba3858a2be32633375c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate-test, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:50 np0005604943 podman[85684]: 2026-02-02 11:31:50.834192827 +0000 UTC m=+0.149426350 container attach 23a65fc0efeebe20a75c7c3db01eea5a2535d6480fba3858a2be32633375c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate-test, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 06:31:50 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate-test[85700]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb  2 06:31:50 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate-test[85700]:                            [--no-systemd] [--no-tmpfs]
Feb  2 06:31:50 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate-test[85700]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb  2 06:31:50 np0005604943 systemd[1]: libpod-23a65fc0efeebe20a75c7c3db01eea5a2535d6480fba3858a2be32633375c0d2.scope: Deactivated successfully.
Feb  2 06:31:50 np0005604943 podman[85684]: 2026-02-02 11:31:50.993508543 +0000 UTC m=+0.308742066 container died 23a65fc0efeebe20a75c7c3db01eea5a2535d6480fba3858a2be32633375c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate-test, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 06:31:51 np0005604943 systemd[1]: var-lib-containers-storage-overlay-e370eea81accac406e8e02421582cb35afeb6ebdc7f549718e39c9f210c57249-merged.mount: Deactivated successfully.
Feb  2 06:31:51 np0005604943 podman[85684]: 2026-02-02 11:31:51.03653769 +0000 UTC m=+0.351771223 container remove 23a65fc0efeebe20a75c7c3db01eea5a2535d6480fba3858a2be32633375c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 06:31:51 np0005604943 systemd[1]: libpod-conmon-23a65fc0efeebe20a75c7c3db01eea5a2535d6480fba3858a2be32633375c0d2.scope: Deactivated successfully.
Feb  2 06:31:51 np0005604943 systemd[1]: Reloading.
Feb  2 06:31:51 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:31:51 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:31:51 np0005604943 systemd[1]: Reloading.
Feb  2 06:31:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:51 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:31:51 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:31:51 np0005604943 ceph-mon[75271]: Deploying daemon osd.0 on compute-0
Feb  2 06:31:51 np0005604943 systemd[1]: Starting Ceph osd.0 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae...
Feb  2 06:31:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:31:51 np0005604943 podman[85859]: 2026-02-02 11:31:51.868853466 +0000 UTC m=+0.044017975 container create f347decca1561ae9f2bc7e91be25925b114cadfed4670f43a6368951692a894f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 06:31:51 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6e47dd740d1c5b46888fdace13bd95d1f1e4b793bf909128d0f1eb8e94766a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6e47dd740d1c5b46888fdace13bd95d1f1e4b793bf909128d0f1eb8e94766a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6e47dd740d1c5b46888fdace13bd95d1f1e4b793bf909128d0f1eb8e94766a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6e47dd740d1c5b46888fdace13bd95d1f1e4b793bf909128d0f1eb8e94766a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6e47dd740d1c5b46888fdace13bd95d1f1e4b793bf909128d0f1eb8e94766a/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:51 np0005604943 podman[85859]: 2026-02-02 11:31:51.845818549 +0000 UTC m=+0.020983098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:51 np0005604943 podman[85859]: 2026-02-02 11:31:51.950880604 +0000 UTC m=+0.126045073 container init f347decca1561ae9f2bc7e91be25925b114cadfed4670f43a6368951692a894f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 06:31:51 np0005604943 podman[85859]: 2026-02-02 11:31:51.95799347 +0000 UTC m=+0.133157969 container start f347decca1561ae9f2bc7e91be25925b114cadfed4670f43a6368951692a894f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:51 np0005604943 podman[85859]: 2026-02-02 11:31:51.96148034 +0000 UTC m=+0.136644829 container attach f347decca1561ae9f2bc7e91be25925b114cadfed4670f43a6368951692a894f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:52 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate[85875]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:52 np0005604943 bash[85859]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:52 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate[85875]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:52 np0005604943 bash[85859]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:52 np0005604943 lvm[85961]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:31:52 np0005604943 lvm[85961]: VG ceph_vg1 finished
Feb  2 06:31:52 np0005604943 lvm[85960]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:31:52 np0005604943 lvm[85960]: VG ceph_vg0 finished
Feb  2 06:31:52 np0005604943 lvm[85963]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:31:52 np0005604943 lvm[85963]: VG ceph_vg2 finished
Feb  2 06:31:52 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate[85875]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 06:31:52 np0005604943 bash[85859]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 06:31:52 np0005604943 bash[85859]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:52 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate[85875]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:52 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:52 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate[85875]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:52 np0005604943 bash[85859]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:52 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate[85875]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  2 06:31:52 np0005604943 bash[85859]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  2 06:31:52 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate[85875]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb  2 06:31:52 np0005604943 bash[85859]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb  2 06:31:52 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate[85875]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:52 np0005604943 bash[85859]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:52 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate[85875]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:52 np0005604943 bash[85859]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:52 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate[85875]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  2 06:31:52 np0005604943 bash[85859]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  2 06:31:52 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate[85875]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  2 06:31:52 np0005604943 bash[85859]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb  2 06:31:52 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate[85875]: --> ceph-volume lvm activate successful for osd ID: 0
Feb  2 06:31:52 np0005604943 bash[85859]: --> ceph-volume lvm activate successful for osd ID: 0
Feb  2 06:31:52 np0005604943 systemd[1]: libpod-f347decca1561ae9f2bc7e91be25925b114cadfed4670f43a6368951692a894f.scope: Deactivated successfully.
Feb  2 06:31:52 np0005604943 systemd[1]: libpod-f347decca1561ae9f2bc7e91be25925b114cadfed4670f43a6368951692a894f.scope: Consumed 1.278s CPU time.
Feb  2 06:31:53 np0005604943 podman[86068]: 2026-02-02 11:31:53.049116925 +0000 UTC m=+0.038681351 container died f347decca1561ae9f2bc7e91be25925b114cadfed4670f43a6368951692a894f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Feb  2 06:31:53 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ee6e47dd740d1c5b46888fdace13bd95d1f1e4b793bf909128d0f1eb8e94766a-merged.mount: Deactivated successfully.
Feb  2 06:31:53 np0005604943 podman[86068]: 2026-02-02 11:31:53.097926089 +0000 UTC m=+0.087490455 container remove f347decca1561ae9f2bc7e91be25925b114cadfed4670f43a6368951692a894f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0-activate, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 06:31:53 np0005604943 podman[86124]: 2026-02-02 11:31:53.319422337 +0000 UTC m=+0.036354004 container create 409c17664cc02736f3933d20ee5231e4ffe9782f6c64da4adfb864281bfbf962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419fe5696336b414acb6689bf176f8fb4ea53130542549a4584ae7ea825cd301/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419fe5696336b414acb6689bf176f8fb4ea53130542549a4584ae7ea825cd301/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419fe5696336b414acb6689bf176f8fb4ea53130542549a4584ae7ea825cd301/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419fe5696336b414acb6689bf176f8fb4ea53130542549a4584ae7ea825cd301/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419fe5696336b414acb6689bf176f8fb4ea53130542549a4584ae7ea825cd301/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:53 np0005604943 podman[86124]: 2026-02-02 11:31:53.380111396 +0000 UTC m=+0.097043123 container init 409c17664cc02736f3933d20ee5231e4ffe9782f6c64da4adfb864281bfbf962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 06:31:53 np0005604943 podman[86124]: 2026-02-02 11:31:53.392572327 +0000 UTC m=+0.109504024 container start 409c17664cc02736f3933d20ee5231e4ffe9782f6c64da4adfb864281bfbf962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:53 np0005604943 bash[86124]: 409c17664cc02736f3933d20ee5231e4ffe9782f6c64da4adfb864281bfbf962
Feb  2 06:31:53 np0005604943 podman[86124]: 2026-02-02 11:31:53.302575439 +0000 UTC m=+0.019507136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:53 np0005604943 systemd[1]: Started Ceph osd.0 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: pidfile_write: ignore empty --pid-file
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 06:31:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:31:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 06:31:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb  2 06:31:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb  2 06:31:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:31:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:31:53 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Feb  2 06:31:53 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0400 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 06:31:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b0000 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: load: jerasure load: lrc 
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69e6b1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69f351800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69f351800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69f351800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69f351800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount shared_bdev_used = 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: RocksDB version: 7.9.2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Git sha 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: DB SUMMARY
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: DB Session ID:  44LL4DEK0ZCN1MJGAPX6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: CURRENT file:  CURRENT
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                         Options.error_if_exists: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.create_if_missing: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                                     Options.env: 0x55d69e541ea0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                                Options.info_log: 0x55d69f59c8a0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                              Options.statistics: (nil)
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.use_fsync: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                              Options.db_log_dir: 
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.write_buffer_manager: 0x55d69f442b40
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.unordered_write: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.row_cache: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                              Options.wal_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.two_write_queues: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.wal_compression: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.atomic_flush: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.max_background_jobs: 4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.max_background_compactions: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.max_subcompactions: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.max_open_files: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Compression algorithms supported:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: 	kZSTD supported: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: 	kXpressCompression supported: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: 	kBZip2Compression supported: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: 	kLZ4Compression supported: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: 	kZlibCompression supported: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: 	kLZ4HCCompression supported: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: 	kSnappyCompression supported: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f59cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e5458d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f59cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e5458d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f59cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e5458d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f59cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e5458d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f59cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e5458d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f59cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e5458d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f59cc60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d69e5458d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f59cc80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d69e545a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f59cc80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e545a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f59cc80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e545a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3baa9361-ee9d-4fda-a264-eac88c99899f
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031913766583, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031913768343, "job": 1, "event": "recovery_finished"}
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: freelist init
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: freelist _read_cfg
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs umount
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69f351800 /var/lib/ceph/osd/ceph-0/block) close
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69f351800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69f351800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69f351800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bdev(0x55d69f351800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluefs mount shared_bdev_used = 27262976
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: RocksDB version: 7.9.2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Git sha 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: DB SUMMARY
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: DB Session ID:  44LL4DEK0ZCN1MJGAPX7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: CURRENT file:  CURRENT
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                         Options.error_if_exists: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.create_if_missing: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                                     Options.env: 0x55d69e541a40
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                                Options.info_log: 0x55d69f59db00
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                              Options.statistics: (nil)
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.use_fsync: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                              Options.db_log_dir: 
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.write_buffer_manager: 0x55d69f443900
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.unordered_write: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.row_cache: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                              Options.wal_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.two_write_queues: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.wal_compression: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.atomic_flush: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.max_background_jobs: 4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.max_background_compactions: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.max_subcompactions: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.max_open_files: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Compression algorithms supported:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: #011kZSTD supported: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: #011kXpressCompression supported: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: #011kBZip2Compression supported: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: #011kLZ4Compression supported: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: #011kZlibCompression supported: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: #011kSnappyCompression supported: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f5ec220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d69e545a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f5ec220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d69e545a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f5ec220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e545a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f5ec220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e545a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f5ec220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e545a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f5ec220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e545a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f5ec220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e545a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f5ec300)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d69e5454b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f5ec300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d69e5454b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d69f5ec300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d69e5454b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3baa9361-ee9d-4fda-a264-eac88c99899f
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031913835768, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031913840367, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031913, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3baa9361-ee9d-4fda-a264-eac88c99899f", "db_session_id": "44LL4DEK0ZCN1MJGAPX7", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031913843934, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031913, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3baa9361-ee9d-4fda-a264-eac88c99899f", "db_session_id": "44LL4DEK0ZCN1MJGAPX7", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031913847068, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031913, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3baa9361-ee9d-4fda-a264-eac88c99899f", "db_session_id": "44LL4DEK0ZCN1MJGAPX7", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031913848878, "job": 1, "event": "recovery_finished"}
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d69f59fc00
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: DB pointer 0x55d69f756000
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d69e545a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d69e545a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d69e545a30#2 capacity: 460.80 MB usag
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: _get_class not permitted to load lua
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: _get_class not permitted to load sdk
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: osd.0 0 load_pgs
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: osd.0 0 load_pgs opened 0 pgs
Feb  2 06:31:53 np0005604943 ceph-osd[86144]: osd.0 0 log_to_monitors true
Feb  2 06:31:53 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0[86140]: 2026-02-02T11:31:53.887+0000 7fb97d0028c0 -1 osd.0 0 log_to_monitors true
Feb  2 06:31:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Feb  2 06:31:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3086364335,v1:192.168.122.100:6803/3086364335]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Feb  2 06:31:54 np0005604943 podman[86683]: 2026-02-02 11:31:54.008103622 +0000 UTC m=+0.044941533 container create 7ffd41b9afcbb233ddef2907446f292b036a7011cfe4d02410ed700668b6c108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carver, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 06:31:54 np0005604943 systemd[1]: Started libpod-conmon-7ffd41b9afcbb233ddef2907446f292b036a7011cfe4d02410ed700668b6c108.scope.
Feb  2 06:31:54 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:54 np0005604943 podman[86683]: 2026-02-02 11:31:54.069740938 +0000 UTC m=+0.106578879 container init 7ffd41b9afcbb233ddef2907446f292b036a7011cfe4d02410ed700668b6c108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:54 np0005604943 podman[86683]: 2026-02-02 11:31:54.080474509 +0000 UTC m=+0.117312450 container start 7ffd41b9afcbb233ddef2907446f292b036a7011cfe4d02410ed700668b6c108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:54 np0005604943 podman[86683]: 2026-02-02 11:31:53.989655078 +0000 UTC m=+0.026493049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:54 np0005604943 podman[86683]: 2026-02-02 11:31:54.084337751 +0000 UTC m=+0.121175712 container attach 7ffd41b9afcbb233ddef2907446f292b036a7011cfe4d02410ed700668b6c108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 06:31:54 np0005604943 naughty_carver[86699]: 167 167
Feb  2 06:31:54 np0005604943 systemd[1]: libpod-7ffd41b9afcbb233ddef2907446f292b036a7011cfe4d02410ed700668b6c108.scope: Deactivated successfully.
Feb  2 06:31:54 np0005604943 podman[86683]: 2026-02-02 11:31:54.087464882 +0000 UTC m=+0.124302833 container died 7ffd41b9afcbb233ddef2907446f292b036a7011cfe4d02410ed700668b6c108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:31:54 np0005604943 systemd[1]: var-lib-containers-storage-overlay-02a743368693fb98a047848e2535e9adf3d2fd059916c0e1dfe32e987ad81225-merged.mount: Deactivated successfully.
Feb  2 06:31:54 np0005604943 podman[86683]: 2026-02-02 11:31:54.132372513 +0000 UTC m=+0.169210424 container remove 7ffd41b9afcbb233ddef2907446f292b036a7011cfe4d02410ed700668b6c108 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carver, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:54 np0005604943 systemd[1]: libpod-conmon-7ffd41b9afcbb233ddef2907446f292b036a7011cfe4d02410ed700668b6c108.scope: Deactivated successfully.
Feb  2 06:31:54 np0005604943 podman[86729]: 2026-02-02 11:31:54.388660619 +0000 UTC m=+0.051206515 container create 5128f603fc719eddc76f5b11fda6941338ddc960add9552390777d092b33db11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate-test, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:54 np0005604943 systemd[1]: Started libpod-conmon-5128f603fc719eddc76f5b11fda6941338ddc960add9552390777d092b33db11.scope.
Feb  2 06:31:54 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:54 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8cceed86e765b389efc1eb863b97a627deba1eb22a05f8940a238624727d80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:54 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8cceed86e765b389efc1eb863b97a627deba1eb22a05f8940a238624727d80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:54 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8cceed86e765b389efc1eb863b97a627deba1eb22a05f8940a238624727d80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:54 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8cceed86e765b389efc1eb863b97a627deba1eb22a05f8940a238624727d80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:54 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8cceed86e765b389efc1eb863b97a627deba1eb22a05f8940a238624727d80/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:54 np0005604943 podman[86729]: 2026-02-02 11:31:54.368595968 +0000 UTC m=+0.031141944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:54 np0005604943 podman[86729]: 2026-02-02 11:31:54.471089457 +0000 UTC m=+0.133635363 container init 5128f603fc719eddc76f5b11fda6941338ddc960add9552390777d092b33db11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate-test, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:54 np0005604943 podman[86729]: 2026-02-02 11:31:54.475546776 +0000 UTC m=+0.138092702 container start 5128f603fc719eddc76f5b11fda6941338ddc960add9552390777d092b33db11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:31:54 np0005604943 podman[86729]: 2026-02-02 11:31:54.479414969 +0000 UTC m=+0.141960855 container attach 5128f603fc719eddc76f5b11fda6941338ddc960add9552390777d092b33db11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: Deploying daemon osd.1 on compute-0
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: from='osd.0 [v2:192.168.122.100:6802/3086364335,v1:192.168.122.100:6803/3086364335]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3086364335,v1:192.168.122.100:6803/3086364335]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3086364335,v1:192.168.122.100:6803/3086364335]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 06:31:54 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 06:31:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 06:31:54 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 06:31:54 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 06:31:54 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate-test[86745]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb  2 06:31:54 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate-test[86745]:                            [--no-systemd] [--no-tmpfs]
Feb  2 06:31:54 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate-test[86745]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb  2 06:31:54 np0005604943 systemd[1]: libpod-5128f603fc719eddc76f5b11fda6941338ddc960add9552390777d092b33db11.scope: Deactivated successfully.
Feb  2 06:31:54 np0005604943 podman[86729]: 2026-02-02 11:31:54.683278045 +0000 UTC m=+0.345823941 container died 5128f603fc719eddc76f5b11fda6941338ddc960add9552390777d092b33db11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:54 np0005604943 systemd[1]: var-lib-containers-storage-overlay-8b8cceed86e765b389efc1eb863b97a627deba1eb22a05f8940a238624727d80-merged.mount: Deactivated successfully.
Feb  2 06:31:54 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:54 np0005604943 podman[86729]: 2026-02-02 11:31:54.730423241 +0000 UTC m=+0.392969157 container remove 5128f603fc719eddc76f5b11fda6941338ddc960add9552390777d092b33db11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate-test, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 06:31:54 np0005604943 systemd[1]: libpod-conmon-5128f603fc719eddc76f5b11fda6941338ddc960add9552390777d092b33db11.scope: Deactivated successfully.
Feb  2 06:31:54 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb  2 06:31:54 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb  2 06:31:54 np0005604943 systemd[1]: Reloading.
Feb  2 06:31:55 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:31:55 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:31:55 np0005604943 systemd[1]: Reloading.
Feb  2 06:31:55 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:31:55 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:31:55 np0005604943 systemd[1]: Starting Ceph osd.1 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae...
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3086364335,v1:192.168.122.100:6803/3086364335]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Feb  2 06:31:55 np0005604943 ceph-osd[86144]: osd.0 0 done with init, starting boot process
Feb  2 06:31:55 np0005604943 ceph-osd[86144]: osd.0 0 start_boot
Feb  2 06:31:55 np0005604943 ceph-osd[86144]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb  2 06:31:55 np0005604943 ceph-osd[86144]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb  2 06:31:55 np0005604943 ceph-osd[86144]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb  2 06:31:55 np0005604943 ceph-osd[86144]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb  2 06:31:55 np0005604943 ceph-osd[86144]: osd.0 0  bench count 12288000 bsize 4 KiB
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 06:31:55 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 06:31:55 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 06:31:55 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 06:31:55 np0005604943 ceph-mgr[75558]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3086364335; not ready for session (expect reconnect)
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 06:31:55 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: from='osd.0 [v2:192.168.122.100:6802/3086364335,v1:192.168.122.100:6803/3086364335]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb  2 06:31:55 np0005604943 ceph-mon[75271]: from='osd.0 [v2:192.168.122.100:6802/3086364335,v1:192.168.122.100:6803/3086364335]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  2 06:31:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:55 np0005604943 podman[86908]: 2026-02-02 11:31:55.713738413 +0000 UTC m=+0.067131016 container create a0726b85a23be34cbb2facc17a75a4111d844c95664c602a52bce5ced2f8628b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 06:31:55 np0005604943 podman[86908]: 2026-02-02 11:31:55.678044119 +0000 UTC m=+0.031436712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:55 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/564e8d22ba39b41ee331c9fba647fec6043bae3bd68d5bfb1ddf656ddbe8b680/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/564e8d22ba39b41ee331c9fba647fec6043bae3bd68d5bfb1ddf656ddbe8b680/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/564e8d22ba39b41ee331c9fba647fec6043bae3bd68d5bfb1ddf656ddbe8b680/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/564e8d22ba39b41ee331c9fba647fec6043bae3bd68d5bfb1ddf656ddbe8b680/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/564e8d22ba39b41ee331c9fba647fec6043bae3bd68d5bfb1ddf656ddbe8b680/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:55 np0005604943 podman[86908]: 2026-02-02 11:31:55.796882403 +0000 UTC m=+0.150275006 container init a0726b85a23be34cbb2facc17a75a4111d844c95664c602a52bce5ced2f8628b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:55 np0005604943 podman[86908]: 2026-02-02 11:31:55.807930873 +0000 UTC m=+0.161323436 container start a0726b85a23be34cbb2facc17a75a4111d844c95664c602a52bce5ced2f8628b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:55 np0005604943 podman[86908]: 2026-02-02 11:31:55.811895597 +0000 UTC m=+0.165288260 container attach a0726b85a23be34cbb2facc17a75a4111d844c95664c602a52bce5ced2f8628b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:55 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate[86923]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:55 np0005604943 bash[86908]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:55 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate[86923]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:55 np0005604943 bash[86908]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:56 np0005604943 lvm[87010]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:31:56 np0005604943 lvm[87008]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:31:56 np0005604943 lvm[87010]: VG ceph_vg1 finished
Feb  2 06:31:56 np0005604943 lvm[87008]: VG ceph_vg0 finished
Feb  2 06:31:56 np0005604943 lvm[87012]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:31:56 np0005604943 lvm[87012]: VG ceph_vg2 finished
Feb  2 06:31:56 np0005604943 ceph-mgr[75558]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3086364335; not ready for session (expect reconnect)
Feb  2 06:31:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 06:31:56 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 06:31:56 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 06:31:56 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate[86923]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 06:31:56 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate[86923]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:56 np0005604943 bash[86908]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 06:31:56 np0005604943 bash[86908]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:56 np0005604943 ceph-mon[75271]: from='osd.0 [v2:192.168.122.100:6802/3086364335,v1:192.168.122.100:6803/3086364335]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 06:31:56 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate[86923]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:56 np0005604943 bash[86908]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:56 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate[86923]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 06:31:56 np0005604943 bash[86908]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 06:31:56 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate[86923]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb  2 06:31:56 np0005604943 bash[86908]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb  2 06:31:56 np0005604943 ceph-mgr[75558]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb  2 06:31:56 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate[86923]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:56 np0005604943 bash[86908]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:56 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate[86923]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:56 np0005604943 bash[86908]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:56 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate[86923]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb  2 06:31:56 np0005604943 bash[86908]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb  2 06:31:56 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate[86923]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 06:31:56 np0005604943 bash[86908]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 06:31:56 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate[86923]: --> ceph-volume lvm activate successful for osd ID: 1
Feb  2 06:31:56 np0005604943 bash[86908]: --> ceph-volume lvm activate successful for osd ID: 1
Feb  2 06:31:56 np0005604943 systemd[1]: libpod-a0726b85a23be34cbb2facc17a75a4111d844c95664c602a52bce5ced2f8628b.scope: Deactivated successfully.
Feb  2 06:31:56 np0005604943 systemd[1]: libpod-a0726b85a23be34cbb2facc17a75a4111d844c95664c602a52bce5ced2f8628b.scope: Consumed 1.228s CPU time.
Feb  2 06:31:56 np0005604943 podman[86908]: 2026-02-02 11:31:56.794351614 +0000 UTC m=+1.147744257 container died a0726b85a23be34cbb2facc17a75a4111d844c95664c602a52bce5ced2f8628b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:31:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:31:56 np0005604943 systemd[1]: var-lib-containers-storage-overlay-564e8d22ba39b41ee331c9fba647fec6043bae3bd68d5bfb1ddf656ddbe8b680-merged.mount: Deactivated successfully.
Feb  2 06:31:56 np0005604943 podman[86908]: 2026-02-02 11:31:56.88116007 +0000 UTC m=+1.234552643 container remove a0726b85a23be34cbb2facc17a75a4111d844c95664c602a52bce5ced2f8628b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1-activate, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:57 np0005604943 podman[87173]: 2026-02-02 11:31:57.119885567 +0000 UTC m=+0.051413901 container create 8937a933e50696556fe35cb340d4ec6a1d14744b5d0b97eff32c66fdd41a97e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 06:31:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac1f82a2d35df53911fd9c77142b2264c8717db91494bc4629e033e4a4a7809/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac1f82a2d35df53911fd9c77142b2264c8717db91494bc4629e033e4a4a7809/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac1f82a2d35df53911fd9c77142b2264c8717db91494bc4629e033e4a4a7809/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac1f82a2d35df53911fd9c77142b2264c8717db91494bc4629e033e4a4a7809/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac1f82a2d35df53911fd9c77142b2264c8717db91494bc4629e033e4a4a7809/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:57 np0005604943 podman[87173]: 2026-02-02 11:31:57.103088301 +0000 UTC m=+0.034616625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:57 np0005604943 podman[87173]: 2026-02-02 11:31:57.200688848 +0000 UTC m=+0.132217242 container init 8937a933e50696556fe35cb340d4ec6a1d14744b5d0b97eff32c66fdd41a97e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:57 np0005604943 podman[87173]: 2026-02-02 11:31:57.206776225 +0000 UTC m=+0.138304549 container start 8937a933e50696556fe35cb340d4ec6a1d14744b5d0b97eff32c66fdd41a97e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:31:57 np0005604943 bash[87173]: 8937a933e50696556fe35cb340d4ec6a1d14744b5d0b97eff32c66fdd41a97e0
Feb  2 06:31:57 np0005604943 systemd[1]: Started Ceph osd.1 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: pidfile_write: ignore empty --pid-file
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:31:57 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Feb  2 06:31:57 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Feb  2 06:31:57 np0005604943 ceph-osd[86144]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 48.995 iops: 12542.716 elapsed_sec: 0.239
Feb  2 06:31:57 np0005604943 ceph-osd[86144]: log_channel(cluster) log [WRN] : OSD bench result of 12542.716258 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 06:31:57 np0005604943 ceph-osd[86144]: osd.0 0 waiting for initial osdmap
Feb  2 06:31:57 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0[86140]: 2026-02-02T11:31:57.297+0000 7fb978f84640 -1 osd.0 0 waiting for initial osdmap
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 06:31:57 np0005604943 ceph-osd[86144]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Feb  2 06:31:57 np0005604943 ceph-osd[86144]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Feb  2 06:31:57 np0005604943 ceph-osd[86144]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Feb  2 06:31:57 np0005604943 ceph-osd[86144]: osd.0 8 check_osdmap_features require_osd_release unknown -> tentacle
Feb  2 06:31:57 np0005604943 ceph-osd[86144]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 06:31:57 np0005604943 ceph-osd[86144]: osd.0 8 set_numa_affinity not setting numa affinity
Feb  2 06:31:57 np0005604943 ceph-osd[86144]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Feb  2 06:31:57 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-0[86140]: 2026-02-02T11:31:57.321+0000 7fb973d89640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c400 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55c000 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: load: jerasure load: lrc 
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 06:31:57 np0005604943 ceph-mgr[75558]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3086364335; not ready for session (expect reconnect)
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 06:31:57 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 06:31:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0e55dc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0f1fd800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0f1fd800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0f1fd800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0f1fd800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount shared_bdev_used = 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: RocksDB version: 7.9.2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Git sha 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: DB SUMMARY
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: DB Session ID:  SU16DWM5TR9OID8GGJWP
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: CURRENT file:  CURRENT
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                         Options.error_if_exists: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.create_if_missing: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                                     Options.env: 0x558f0e3edea0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                                Options.info_log: 0x558f0f47e8a0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                              Options.statistics: (nil)
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.use_fsync: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                              Options.db_log_dir: 
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.write_buffer_manager: 0x558f0e452b40
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.unordered_write: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.row_cache: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                              Options.wal_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.two_write_queues: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.wal_compression: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.atomic_flush: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.max_background_jobs: 4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.max_background_compactions: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.max_subcompactions: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.max_open_files: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Compression algorithms supported:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: #011kZSTD supported: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: #011kXpressCompression supported: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: #011kBZip2Compression supported: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: #011kLZ4Compression supported: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: #011kZlibCompression supported: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: #011kSnappyCompression supported: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f47ec60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558f0e3f18d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f47ec60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558f0e3f18d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f47ec60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558f0e3f18d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f47ec60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558f0e3f18d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f47ec60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558f0e3f18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f47ec60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558f0e3f18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f47ec60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558f0e3f18d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f47ec80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558f0e3f1a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f47ec80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558f0e3f1a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f47ec80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558f0e3f1a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 65df7295-a059-4555-8706-dbc70252f0ee
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031917594416, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031917595882, "job": 1, "event": "recovery_finished"}
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: freelist init
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: freelist _read_cfg
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs umount
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0f1fd800 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0f1fd800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0f1fd800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0f1fd800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bdev(0x558f0f1fd800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluefs mount shared_bdev_used = 27262976
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: RocksDB version: 7.9.2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Git sha 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: DB SUMMARY
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: DB Session ID:  SU16DWM5TR9OID8GGJWO
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: CURRENT file:  CURRENT
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                         Options.error_if_exists: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.create_if_missing: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                                     Options.env: 0x558f0e3edd50
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                                Options.info_log: 0x558f0f47fb00
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                              Options.statistics: (nil)
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.use_fsync: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                              Options.db_log_dir: 
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.write_buffer_manager: 0x558f0e453900
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.unordered_write: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.row_cache: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                              Options.wal_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.two_write_queues: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.wal_compression: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.atomic_flush: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.max_background_jobs: 4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.max_background_compactions: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.max_subcompactions: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.max_open_files: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Compression algorithms supported:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: 	kZSTD supported: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: 	kXpressCompression supported: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: 	kBZip2Compression supported: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: 	kLZ4Compression supported: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: 	kZlibCompression supported: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: 	kLZ4HCCompression supported: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: 	kSnappyCompression supported: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f4a8220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558f0e3f1a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f4a8220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558f0e3f1a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f4a8220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558f0e3f1a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f4a8220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558f0e3f1a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f4a8220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558f0e3f1a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f4a8220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558f0e3f1a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f4a8220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558f0e3f1a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f4a8300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558f0e3f14b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f4a8300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558f0e3f14b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:           Options.merge_operator: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558f0f4a8300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558f0e3f14b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.compression: LZ4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.num_levels: 7
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 65df7295-a059-4555-8706-dbc70252f0ee
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031917647690, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031917651978, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031917, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "65df7295-a059-4555-8706-dbc70252f0ee", "db_session_id": "SU16DWM5TR9OID8GGJWO", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031917655043, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031917, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "65df7295-a059-4555-8706-dbc70252f0ee", "db_session_id": "SU16DWM5TR9OID8GGJWO", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031917658021, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031917, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "65df7295-a059-4555-8706-dbc70252f0ee", "db_session_id": "SU16DWM5TR9OID8GGJWO", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031917659541, "job": 1, "event": "recovery_finished"}
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558f0f481c00
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: DB pointer 0x558f0f638000
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0e3f1a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0e3f1a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558f0e3f1a30#2 capacity: 460.80 MB usag
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: _get_class not permitted to load lua
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: _get_class not permitted to load sdk
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: osd.1 0 load_pgs
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: osd.1 0 load_pgs opened 0 pgs
Feb  2 06:31:57 np0005604943 ceph-osd[87192]: osd.1 0 log_to_monitors true
Feb  2 06:31:57 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1[87188]: 2026-02-02T11:31:57.704+0000 7ff40dde68c0 -1 osd.1 0 log_to_monitors true
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Feb  2 06:31:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/707724241,v1:192.168.122.100:6807/707724241]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb  2 06:31:57 np0005604943 podman[87731]: 2026-02-02 11:31:57.798939723 +0000 UTC m=+0.053694027 container create e162dfcf2060f57615bc8fb68c830d30ca2ba1c52739778ee84fc09f9cff6aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 06:31:57 np0005604943 systemd[1]: Started libpod-conmon-e162dfcf2060f57615bc8fb68c830d30ca2ba1c52739778ee84fc09f9cff6aca.scope.
Feb  2 06:31:57 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:57 np0005604943 podman[87731]: 2026-02-02 11:31:57.778409278 +0000 UTC m=+0.033163612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:57 np0005604943 podman[87731]: 2026-02-02 11:31:57.876436468 +0000 UTC m=+0.131190772 container init e162dfcf2060f57615bc8fb68c830d30ca2ba1c52739778ee84fc09f9cff6aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:57 np0005604943 podman[87731]: 2026-02-02 11:31:57.885925193 +0000 UTC m=+0.140679487 container start e162dfcf2060f57615bc8fb68c830d30ca2ba1c52739778ee84fc09f9cff6aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:57 np0005604943 podman[87731]: 2026-02-02 11:31:57.889519507 +0000 UTC m=+0.144273771 container attach e162dfcf2060f57615bc8fb68c830d30ca2ba1c52739778ee84fc09f9cff6aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_hugle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 06:31:57 np0005604943 sweet_hugle[87747]: 167 167
Feb  2 06:31:57 np0005604943 systemd[1]: libpod-e162dfcf2060f57615bc8fb68c830d30ca2ba1c52739778ee84fc09f9cff6aca.scope: Deactivated successfully.
Feb  2 06:31:57 np0005604943 conmon[87747]: conmon e162dfcf2060f57615bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e162dfcf2060f57615bc8fb68c830d30ca2ba1c52739778ee84fc09f9cff6aca.scope/container/memory.events
Feb  2 06:31:57 np0005604943 podman[87731]: 2026-02-02 11:31:57.891750152 +0000 UTC m=+0.146504406 container died e162dfcf2060f57615bc8fb68c830d30ca2ba1c52739778ee84fc09f9cff6aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_hugle, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Feb  2 06:31:57 np0005604943 systemd[1]: var-lib-containers-storage-overlay-db86d4451621ba71e3b23f4fbc6079f92e8d318779f88b4e54e788f64b365c3f-merged.mount: Deactivated successfully.
Feb  2 06:31:57 np0005604943 podman[87731]: 2026-02-02 11:31:57.930296978 +0000 UTC m=+0.185051232 container remove e162dfcf2060f57615bc8fb68c830d30ca2ba1c52739778ee84fc09f9cff6aca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:31:57 np0005604943 systemd[1]: libpod-conmon-e162dfcf2060f57615bc8fb68c830d30ca2ba1c52739778ee84fc09f9cff6aca.scope: Deactivated successfully.
Feb  2 06:31:58 np0005604943 podman[87777]: 2026-02-02 11:31:58.1108646 +0000 UTC m=+0.039384482 container create 13a108780a7732f9b22548c7c46492aab10f350dd64c12abfd38cae7d854e5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate-test, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:31:58 np0005604943 systemd[1]: Started libpod-conmon-13a108780a7732f9b22548c7c46492aab10f350dd64c12abfd38cae7d854e5fa.scope.
Feb  2 06:31:58 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f5af1257cf00eb70e6c85182cfb3db55e2d593f9daba92308a7977e364a48e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f5af1257cf00eb70e6c85182cfb3db55e2d593f9daba92308a7977e364a48e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f5af1257cf00eb70e6c85182cfb3db55e2d593f9daba92308a7977e364a48e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f5af1257cf00eb70e6c85182cfb3db55e2d593f9daba92308a7977e364a48e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f5af1257cf00eb70e6c85182cfb3db55e2d593f9daba92308a7977e364a48e/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:58 np0005604943 podman[87777]: 2026-02-02 11:31:58.171114856 +0000 UTC m=+0.099634748 container init 13a108780a7732f9b22548c7c46492aab10f350dd64c12abfd38cae7d854e5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate-test, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:58 np0005604943 podman[87777]: 2026-02-02 11:31:58.182912199 +0000 UTC m=+0.111432071 container start 13a108780a7732f9b22548c7c46492aab10f350dd64c12abfd38cae7d854e5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate-test, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 06:31:58 np0005604943 podman[87777]: 2026-02-02 11:31:58.092375115 +0000 UTC m=+0.020895077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:58 np0005604943 podman[87777]: 2026-02-02 11:31:58.18608488 +0000 UTC m=+0.114604762 container attach 13a108780a7732f9b22548c7c46492aab10f350dd64c12abfd38cae7d854e5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/707724241,v1:192.168.122.100:6807/707724241]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3086364335,v1:192.168.122.100:6803/3086364335] boot
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/707724241,v1:192.168.122.100:6807/707724241]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 06:31:58 np0005604943 ceph-osd[86144]: osd.0 9 state: booting -> active
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 06:31:58 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 06:31:58 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 06:31:58 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate-test[87793]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb  2 06:31:58 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate-test[87793]:                            [--no-systemd] [--no-tmpfs]
Feb  2 06:31:58 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate-test[87793]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb  2 06:31:58 np0005604943 systemd[1]: libpod-13a108780a7732f9b22548c7c46492aab10f350dd64c12abfd38cae7d854e5fa.scope: Deactivated successfully.
Feb  2 06:31:58 np0005604943 podman[87777]: 2026-02-02 11:31:58.391606045 +0000 UTC m=+0.320125927 container died 13a108780a7732f9b22548c7c46492aab10f350dd64c12abfd38cae7d854e5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 06:31:58 np0005604943 systemd[1]: var-lib-containers-storage-overlay-19f5af1257cf00eb70e6c85182cfb3db55e2d593f9daba92308a7977e364a48e-merged.mount: Deactivated successfully.
Feb  2 06:31:58 np0005604943 podman[87777]: 2026-02-02 11:31:58.446161416 +0000 UTC m=+0.374681298 container remove 13a108780a7732f9b22548c7c46492aab10f350dd64c12abfd38cae7d854e5fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:31:58 np0005604943 systemd[1]: libpod-conmon-13a108780a7732f9b22548c7c46492aab10f350dd64c12abfd38cae7d854e5fa.scope: Deactivated successfully.
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: Deploying daemon osd.2 on compute-0
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: OSD bench result of 12542.716258 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: from='osd.1 [v2:192.168.122.100:6806/707724241,v1:192.168.122.100:6807/707724241]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: from='osd.1 [v2:192.168.122.100:6806/707724241,v1:192.168.122.100:6807/707724241]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: osd.0 [v2:192.168.122.100:6802/3086364335,v1:192.168.122.100:6803/3086364335] boot
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: from='osd.1 [v2:192.168.122.100:6806/707724241,v1:192.168.122.100:6807/707724241]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  2 06:31:58 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb  2 06:31:58 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb  2 06:31:58 np0005604943 systemd[1]: Reloading.
Feb  2 06:31:58 np0005604943 ceph-mgr[75558]: [devicehealth INFO root] creating mgr pool
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Feb  2 06:31:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Feb  2 06:31:58 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:31:58 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:31:58 np0005604943 systemd[1]: Reloading.
Feb  2 06:31:59 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:31:59 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:31:59 np0005604943 systemd[1]: Starting Ceph osd.2 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae...
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/707724241,v1:192.168.122.100:6807/707724241]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Feb  2 06:31:59 np0005604943 ceph-osd[87192]: osd.1 0 done with init, starting boot process
Feb  2 06:31:59 np0005604943 ceph-osd[87192]: osd.1 0 start_boot
Feb  2 06:31:59 np0005604943 ceph-osd[87192]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb  2 06:31:59 np0005604943 ceph-osd[87192]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb  2 06:31:59 np0005604943 ceph-osd[87192]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb  2 06:31:59 np0005604943 ceph-osd[87192]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb  2 06:31:59 np0005604943 ceph-osd[87192]: osd.1 0  bench count 12288000 bsize 4 KiB
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Feb  2 06:31:59 np0005604943 ceph-osd[86144]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb  2 06:31:59 np0005604943 ceph-osd[86144]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Feb  2 06:31:59 np0005604943 ceph-osd[86144]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 06:31:59 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 06:31:59 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb  2 06:31:59 np0005604943 ceph-mgr[75558]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/707724241; not ready for session (expect reconnect)
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 06:31:59 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 06:31:59 np0005604943 podman[87956]: 2026-02-02 11:31:59.46994149 +0000 UTC m=+0.065185520 container create a5859757eaf2787995aa3b9b0be03239647d41280fa147f042d298f1c6b29e02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Feb  2 06:31:59 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:31:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc99ca997598a6a4ad1529117d1f033d2a917f6d079684c5caa11a83e11d77e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:59 np0005604943 podman[87956]: 2026-02-02 11:31:59.442884747 +0000 UTC m=+0.038128817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:31:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc99ca997598a6a4ad1529117d1f033d2a917f6d079684c5caa11a83e11d77e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc99ca997598a6a4ad1529117d1f033d2a917f6d079684c5caa11a83e11d77e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc99ca997598a6a4ad1529117d1f033d2a917f6d079684c5caa11a83e11d77e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc99ca997598a6a4ad1529117d1f033d2a917f6d079684c5caa11a83e11d77e/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:31:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v26: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Feb  2 06:31:59 np0005604943 podman[87956]: 2026-02-02 11:31:59.556620352 +0000 UTC m=+0.151864412 container init a5859757eaf2787995aa3b9b0be03239647d41280fa147f042d298f1c6b29e02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 06:31:59 np0005604943 podman[87956]: 2026-02-02 11:31:59.565898752 +0000 UTC m=+0.161142732 container start a5859757eaf2787995aa3b9b0be03239647d41280fa147f042d298f1c6b29e02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 06:31:59 np0005604943 podman[87956]: 2026-02-02 11:31:59.588197448 +0000 UTC m=+0.183441468 container attach a5859757eaf2787995aa3b9b0be03239647d41280fa147f042d298f1c6b29e02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: from='osd.1 [v2:192.168.122.100:6806/707724241,v1:192.168.122.100:6807/707724241]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb  2 06:31:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb  2 06:31:59 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate[87971]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:59 np0005604943 bash[87956]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:59 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate[87971]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:31:59 np0005604943 bash[87956]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:32:00 np0005604943 lvm[88055]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:32:00 np0005604943 lvm[88055]: VG ceph_vg0 finished
Feb  2 06:32:00 np0005604943 lvm[88058]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:32:00 np0005604943 lvm[88058]: VG ceph_vg1 finished
Feb  2 06:32:00 np0005604943 lvm[88060]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:32:00 np0005604943 lvm[88060]: VG ceph_vg2 finished
Feb  2 06:32:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Feb  2 06:32:00 np0005604943 ceph-mgr[75558]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/707724241; not ready for session (expect reconnect)
Feb  2 06:32:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 06:32:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 06:32:00 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 06:32:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb  2 06:32:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Feb  2 06:32:00 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Feb  2 06:32:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 06:32:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 06:32:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 06:32:00 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 06:32:00 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 06:32:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 06:32:00 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate[87971]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 06:32:00 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate[87971]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:32:00 np0005604943 bash[87956]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 06:32:00 np0005604943 bash[87956]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:32:00 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate[87971]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:32:00 np0005604943 bash[87956]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 06:32:00 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate[87971]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  2 06:32:00 np0005604943 bash[87956]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  2 06:32:00 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate[87971]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb  2 06:32:00 np0005604943 bash[87956]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb  2 06:32:00 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate[87971]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:00 np0005604943 bash[87956]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:00 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate[87971]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:00 np0005604943 bash[87956]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:00 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate[87971]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb  2 06:32:00 np0005604943 bash[87956]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb  2 06:32:00 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate[87971]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  2 06:32:00 np0005604943 bash[87956]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb  2 06:32:00 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate[87971]: --> ceph-volume lvm activate successful for osd ID: 2
Feb  2 06:32:00 np0005604943 bash[87956]: --> ceph-volume lvm activate successful for osd ID: 2
Feb  2 06:32:00 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb  2 06:32:00 np0005604943 podman[87956]: 2026-02-02 11:32:00.635641948 +0000 UTC m=+1.230885988 container died a5859757eaf2787995aa3b9b0be03239647d41280fa147f042d298f1c6b29e02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:00 np0005604943 systemd[1]: libpod-a5859757eaf2787995aa3b9b0be03239647d41280fa147f042d298f1c6b29e02.scope: Deactivated successfully.
Feb  2 06:32:00 np0005604943 systemd[1]: libpod-a5859757eaf2787995aa3b9b0be03239647d41280fa147f042d298f1c6b29e02.scope: Consumed 1.346s CPU time.
Feb  2 06:32:00 np0005604943 systemd[1]: var-lib-containers-storage-overlay-6cc99ca997598a6a4ad1529117d1f033d2a917f6d079684c5caa11a83e11d77e-merged.mount: Deactivated successfully.
Feb  2 06:32:00 np0005604943 podman[87956]: 2026-02-02 11:32:00.722786083 +0000 UTC m=+1.318030073 container remove a5859757eaf2787995aa3b9b0be03239647d41280fa147f042d298f1c6b29e02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2-activate, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:00 np0005604943 podman[88216]: 2026-02-02 11:32:00.928099472 +0000 UTC m=+0.049309920 container create 599aa4410c1f164fe63289f8e57a829ab4dd98bf68fe6ed58d4ebf68bc2ecffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 06:32:00 np0005604943 podman[88216]: 2026-02-02 11:32:00.896328741 +0000 UTC m=+0.017539179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fb1364ed7b5d9d589b8cddf9155cefe5a84933a55b185272f06719082c72bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fb1364ed7b5d9d589b8cddf9155cefe5a84933a55b185272f06719082c72bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fb1364ed7b5d9d589b8cddf9155cefe5a84933a55b185272f06719082c72bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fb1364ed7b5d9d589b8cddf9155cefe5a84933a55b185272f06719082c72bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fb1364ed7b5d9d589b8cddf9155cefe5a84933a55b185272f06719082c72bc/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:01 np0005604943 podman[88216]: 2026-02-02 11:32:01.038891242 +0000 UTC m=+0.160101670 container init 599aa4410c1f164fe63289f8e57a829ab4dd98bf68fe6ed58d4ebf68bc2ecffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:01 np0005604943 podman[88216]: 2026-02-02 11:32:01.046657307 +0000 UTC m=+0.167867735 container start 599aa4410c1f164fe63289f8e57a829ab4dd98bf68fe6ed58d4ebf68bc2ecffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:01 np0005604943 bash[88216]: 599aa4410c1f164fe63289f8e57a829ab4dd98bf68fe6ed58d4ebf68bc2ecffd
Feb  2 06:32:01 np0005604943 systemd[1]: Started Ceph osd.2 for 4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: pidfile_write: ignore empty --pid-file
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 06:32:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 06:32:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 06:32:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948400 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67948000 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: load: jerasure load: lrc 
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb  2 06:32:01 np0005604943 ceph-mgr[75558]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/707724241; not ready for session (expect reconnect)
Feb  2 06:32:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 06:32:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f67949c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f685df800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f685df800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f685df800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f685df800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount shared_bdev_used = 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: RocksDB version: 7.9.2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Git sha 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: DB SUMMARY
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: DB Session ID:  WJU82V70N94RWMI6JYET
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: CURRENT file:  CURRENT
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                         Options.error_if_exists: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.create_if_missing: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                                     Options.env: 0x560f677d9ea0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                                Options.info_log: 0x560f688348a0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                              Options.statistics: (nil)
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.use_fsync: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                              Options.db_log_dir: 
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.write_buffer_manager: 0x560f686d0b40
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.unordered_write: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.row_cache: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                              Options.wal_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.two_write_queues: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.wal_compression: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.atomic_flush: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.max_background_jobs: 4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.max_background_compactions: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.max_subcompactions: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.max_open_files: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Compression algorithms supported:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: 	kZSTD supported: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: 	kXpressCompression supported: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: 	kBZip2Compression supported: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: 	kLZ4Compression supported: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: 	kZlibCompression supported: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: 	kLZ4HCCompression supported: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: 	kSnappyCompression supported: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dd8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dd8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560f677dd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560f677dd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560f677dd8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dd8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dd8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 30498c56-d0cf-4c12-a712-3f74e9411e23
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031921423452, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031921424623, "job": 1, "event": "recovery_finished"}
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: freelist init
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: freelist _read_cfg
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs umount
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f685df800 /var/lib/ceph/osd/ceph-2/block) close
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f685df800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f685df800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f685df800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bdev(0x560f685df800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluefs mount shared_bdev_used = 27262976
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: RocksDB version: 7.9.2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Git sha 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Compile date 2025-10-30 15:42:43
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: DB SUMMARY
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: DB Session ID:  WJU82V70N94RWMI6JYES
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: CURRENT file:  CURRENT
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                         Options.error_if_exists: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.create_if_missing: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                                     Options.env: 0x560f677d9ce0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                                Options.info_log: 0x560f68834a40
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                              Options.statistics: (nil)
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.use_fsync: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                              Options.db_log_dir: 
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.write_buffer_manager: 0x560f686d0b40
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.unordered_write: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.row_cache: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                              Options.wal_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.two_write_queues: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.wal_compression: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.atomic_flush: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.max_background_jobs: 4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.max_background_compactions: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.max_subcompactions: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.max_open_files: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Compression algorithms supported:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: #011kZSTD supported: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: #011kXpressCompression supported: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: #011kBZip2Compression supported: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: #011kLZ4Compression supported: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: #011kZlibCompression supported: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: #011kLZ4HCCompression supported: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: #011kSnappyCompression supported: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dd8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dd8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dd8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dd8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dd8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dd8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f68834bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dd8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f688350c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[87192]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 42.365 iops: 10845.404 elapsed_sec: 0.277
Feb  2 06:32:01 np0005604943 ceph-osd[87192]: log_channel(cluster) log [WRN] : OSD bench result of 10845.403740 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 06:32:01 np0005604943 ceph-osd[87192]: osd.1 0 waiting for initial osdmap
Feb  2 06:32:01 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1[87188]: 2026-02-02T11:32:01.480+0000 7ff409d68640 -1 osd.1 0 waiting for initial osdmap
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f688350c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:           Options.merge_operator: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f688350c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560f677dda30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.compression: LZ4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.num_levels: 7
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.bloom_locality: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                               Options.ttl: 2592000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                       Options.enable_blob_files: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                           Options.min_blob_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 30498c56-d0cf-4c12-a712-3f74e9411e23
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031921472691, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031921486625, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031921, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30498c56-d0cf-4c12-a712-3f74e9411e23", "db_session_id": "WJU82V70N94RWMI6JYES", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031921493254, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031921, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30498c56-d0cf-4c12-a712-3f74e9411e23", "db_session_id": "WJU82V70N94RWMI6JYES", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:32:01 np0005604943 ceph-osd[87192]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb  2 06:32:01 np0005604943 ceph-osd[87192]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb  2 06:32:01 np0005604943 ceph-osd[87192]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb  2 06:32:01 np0005604943 ceph-osd[87192]: osd.1 11 check_osdmap_features require_osd_release unknown -> tentacle
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031921507866, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031921, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30498c56-d0cf-4c12-a712-3f74e9411e23", "db_session_id": "WJU82V70N94RWMI6JYES", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770031921511682, "job": 1, "event": "recovery_finished"}
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb  2 06:32:01 np0005604943 ceph-osd[87192]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 06:32:01 np0005604943 ceph-osd[87192]: osd.1 11 set_numa_affinity not setting numa affinity
Feb  2 06:32:01 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-1[87188]: 2026-02-02T11:32:01.527+0000 7ff404b6d640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 06:32:01 np0005604943 ceph-osd[87192]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560f68836000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: DB pointer 0x560f689ee000
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560f677dd8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560f677dd8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560f677dd8d0#2 capacity: 460.80 MB usag
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: _get_class not permitted to load lua
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: _get_class not permitted to load sdk
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: osd.2 0 load_pgs
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: osd.2 0 load_pgs opened 0 pgs
Feb  2 06:32:01 np0005604943 ceph-osd[88236]: osd.2 0 log_to_monitors true
Feb  2 06:32:01 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2[88232]: 2026-02-02T11:32:01.550+0000 7f4da24c88c0 -1 osd.2 0 log_to_monitors true
Feb  2 06:32:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v28: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Feb  2 06:32:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Feb  2 06:32:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2157679858,v1:192.168.122.100:6811/2157679858]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb  2 06:32:01 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:01 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:01 np0005604943 ceph-mon[75271]: from='osd.2 [v2:192.168.122.100:6810/2157679858,v1:192.168.122.100:6811/2157679858]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb  2 06:32:01 np0005604943 podman[88752]: 2026-02-02 11:32:01.61483185 +0000 UTC m=+0.045027906 container create f655bcead17d84f127aed199ee958072063f774e6a3e3639dc2d8f8b6587a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_fermat, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:01 np0005604943 systemd[1]: Started libpod-conmon-f655bcead17d84f127aed199ee958072063f774e6a3e3639dc2d8f8b6587a919.scope.
Feb  2 06:32:01 np0005604943 podman[88752]: 2026-02-02 11:32:01.585969123 +0000 UTC m=+0.016165189 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:01 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:01 np0005604943 podman[88752]: 2026-02-02 11:32:01.705346413 +0000 UTC m=+0.135542569 container init f655bcead17d84f127aed199ee958072063f774e6a3e3639dc2d8f8b6587a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_fermat, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:32:01 np0005604943 podman[88752]: 2026-02-02 11:32:01.713683435 +0000 UTC m=+0.143879511 container start f655bcead17d84f127aed199ee958072063f774e6a3e3639dc2d8f8b6587a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_fermat, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:01 np0005604943 agitated_fermat[88768]: 167 167
Feb  2 06:32:01 np0005604943 systemd[1]: libpod-f655bcead17d84f127aed199ee958072063f774e6a3e3639dc2d8f8b6587a919.scope: Deactivated successfully.
Feb  2 06:32:01 np0005604943 podman[88752]: 2026-02-02 11:32:01.719581705 +0000 UTC m=+0.149777801 container attach f655bcead17d84f127aed199ee958072063f774e6a3e3639dc2d8f8b6587a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_fermat, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 06:32:01 np0005604943 conmon[88768]: conmon f655bcead17d84f127ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f655bcead17d84f127aed199ee958072063f774e6a3e3639dc2d8f8b6587a919.scope/container/memory.events
Feb  2 06:32:01 np0005604943 podman[88752]: 2026-02-02 11:32:01.720823491 +0000 UTC m=+0.151019607 container died f655bcead17d84f127aed199ee958072063f774e6a3e3639dc2d8f8b6587a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:01 np0005604943 systemd[1]: var-lib-containers-storage-overlay-456560eed13aa42e3b5eb95aca13071c75667770438727a267321d59624c5f56-merged.mount: Deactivated successfully.
Feb  2 06:32:01 np0005604943 podman[88752]: 2026-02-02 11:32:01.772691994 +0000 UTC m=+0.202888040 container remove f655bcead17d84f127aed199ee958072063f774e6a3e3639dc2d8f8b6587a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Feb  2 06:32:01 np0005604943 systemd[1]: libpod-conmon-f655bcead17d84f127aed199ee958072063f774e6a3e3639dc2d8f8b6587a919.scope: Deactivated successfully.
Feb  2 06:32:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:32:01 np0005604943 podman[88792]: 2026-02-02 11:32:01.96028274 +0000 UTC m=+0.062118252 container create de8073fa6bf0687c65b3268fe3e64d0cf7b04bf226bdb8abe7e3b1b998fada8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_clarke, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:02 np0005604943 systemd[1]: Started libpod-conmon-de8073fa6bf0687c65b3268fe3e64d0cf7b04bf226bdb8abe7e3b1b998fada8a.scope.
Feb  2 06:32:02 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:02 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3753715979c28710c108163f56a7e459f55ee804a1f12c23f6f72d90064128b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:02 np0005604943 podman[88792]: 2026-02-02 11:32:01.935781799 +0000 UTC m=+0.037617361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:02 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3753715979c28710c108163f56a7e459f55ee804a1f12c23f6f72d90064128b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:02 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3753715979c28710c108163f56a7e459f55ee804a1f12c23f6f72d90064128b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:02 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3753715979c28710c108163f56a7e459f55ee804a1f12c23f6f72d90064128b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:02 np0005604943 podman[88792]: 2026-02-02 11:32:02.042295086 +0000 UTC m=+0.144130598 container init de8073fa6bf0687c65b3268fe3e64d0cf7b04bf226bdb8abe7e3b1b998fada8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_clarke, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 06:32:02 np0005604943 podman[88792]: 2026-02-02 11:32:02.047904418 +0000 UTC m=+0.149739900 container start de8073fa6bf0687c65b3268fe3e64d0cf7b04bf226bdb8abe7e3b1b998fada8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:02 np0005604943 podman[88792]: 2026-02-02 11:32:02.051944125 +0000 UTC m=+0.153779637 container attach de8073fa6bf0687c65b3268fe3e64d0cf7b04bf226bdb8abe7e3b1b998fada8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_clarke, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2157679858,v1:192.168.122.100:6811/2157679858]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/707724241,v1:192.168.122.100:6807/707724241] boot
Feb  2 06:32:02 np0005604943 ceph-osd[87192]: osd.1 12 state: booting -> active
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2157679858,v1:192.168.122.100:6811/2157679858]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  2 06:32:02 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Feb  2 06:32:02 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 06:32:02 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb  2 06:32:02 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb  2 06:32:02 np0005604943 lvm[88885]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:32:02 np0005604943 lvm[88885]: VG ceph_vg0 finished
Feb  2 06:32:02 np0005604943 lvm[88887]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:32:02 np0005604943 lvm[88887]: VG ceph_vg1 finished
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: OSD bench result of 10845.403740 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: from='osd.2 [v2:192.168.122.100:6810/2157679858,v1:192.168.122.100:6811/2157679858]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: osd.1 [v2:192.168.122.100:6806/707724241,v1:192.168.122.100:6807/707724241] boot
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: from='osd.2 [v2:192.168.122.100:6810/2157679858,v1:192.168.122.100:6811/2157679858]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb  2 06:32:02 np0005604943 lvm[88888]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:32:02 np0005604943 lvm[88888]: VG ceph_vg2 finished
Feb  2 06:32:02 np0005604943 relaxed_clarke[88807]: {}
Feb  2 06:32:02 np0005604943 systemd[1]: libpod-de8073fa6bf0687c65b3268fe3e64d0cf7b04bf226bdb8abe7e3b1b998fada8a.scope: Deactivated successfully.
Feb  2 06:32:02 np0005604943 systemd[1]: libpod-de8073fa6bf0687c65b3268fe3e64d0cf7b04bf226bdb8abe7e3b1b998fada8a.scope: Consumed 1.006s CPU time.
Feb  2 06:32:02 np0005604943 podman[88792]: 2026-02-02 11:32:02.812566874 +0000 UTC m=+0.914402356 container died de8073fa6bf0687c65b3268fe3e64d0cf7b04bf226bdb8abe7e3b1b998fada8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_clarke, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Feb  2 06:32:02 np0005604943 systemd[1]: var-lib-containers-storage-overlay-3753715979c28710c108163f56a7e459f55ee804a1f12c23f6f72d90064128b2-merged.mount: Deactivated successfully.
Feb  2 06:32:02 np0005604943 podman[88792]: 2026-02-02 11:32:02.866320602 +0000 UTC m=+0.968156084 container remove de8073fa6bf0687c65b3268fe3e64d0cf7b04bf226bdb8abe7e3b1b998fada8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 06:32:02 np0005604943 systemd[1]: libpod-conmon-de8073fa6bf0687c65b3268fe3e64d0cf7b04bf226bdb8abe7e3b1b998fada8a.scope: Deactivated successfully.
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Feb  2 06:32:03 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2157679858,v1:192.168.122.100:6811/2157679858]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 06:32:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Feb  2 06:32:03 np0005604943 ceph-osd[88236]: osd.2 0 done with init, starting boot process
Feb  2 06:32:03 np0005604943 ceph-osd[88236]: osd.2 0 start_boot
Feb  2 06:32:03 np0005604943 ceph-osd[88236]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb  2 06:32:03 np0005604943 ceph-osd[88236]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb  2 06:32:03 np0005604943 ceph-osd[88236]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb  2 06:32:03 np0005604943 ceph-osd[88236]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb  2 06:32:03 np0005604943 ceph-osd[88236]: osd.2 0  bench count 12288000 bsize 4 KiB
Feb  2 06:32:03 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Feb  2 06:32:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 06:32:03 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 06:32:03 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:32:03 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 06:32:03 np0005604943 ceph-mgr[75558]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2157679858; not ready for session (expect reconnect)
Feb  2 06:32:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 06:32:03 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 06:32:03 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 06:32:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v31: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb  2 06:32:03 np0005604943 podman[89020]: 2026-02-02 11:32:03.662818041 +0000 UTC m=+0.083324926 container exec fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:03 np0005604943 podman[89020]: 2026-02-02 11:32:03.786626008 +0000 UTC m=+0.207132853 container exec_died fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:03 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:03 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:03 np0005604943 ceph-mon[75271]: from='osd.2 [v2:192.168.122.100:6810/2157679858,v1:192.168.122.100:6811/2157679858]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 06:32:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Feb  2 06:32:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Feb  2 06:32:04 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Feb  2 06:32:04 np0005604943 ceph-mgr[75558]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2157679858; not ready for session (expect reconnect)
Feb  2 06:32:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 06:32:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 06:32:04 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 06:32:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:04 np0005604943 ceph-mgr[75558]: [devicehealth INFO root] creating main.db for devicehealth
Feb  2 06:32:04 np0005604943 podman[89232]: 2026-02-02 11:32:04.737770278 +0000 UTC m=+0.045538360 container create 99421253b846ff3cf4cf64396d04d1e0d17de99dceddd53dc585bd37074e7a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_knuth, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 06:32:04 np0005604943 ceph-mgr[75558]: [devicehealth INFO root] Check health
Feb  2 06:32:04 np0005604943 systemd[1]: Started libpod-conmon-99421253b846ff3cf4cf64396d04d1e0d17de99dceddd53dc585bd37074e7a36.scope.
Feb  2 06:32:04 np0005604943 ceph-mgr[75558]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Feb  2 06:32:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb  2 06:32:04 np0005604943 podman[89232]: 2026-02-02 11:32:04.711772215 +0000 UTC m=+0.019540307 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:04 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb  2 06:32:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 06:32:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb  2 06:32:04 np0005604943 podman[89232]: 2026-02-02 11:32:04.846465507 +0000 UTC m=+0.154233669 container init 99421253b846ff3cf4cf64396d04d1e0d17de99dceddd53dc585bd37074e7a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_knuth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 06:32:04 np0005604943 podman[89232]: 2026-02-02 11:32:04.852215304 +0000 UTC m=+0.159983386 container start 99421253b846ff3cf4cf64396d04d1e0d17de99dceddd53dc585bd37074e7a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 06:32:04 np0005604943 stupefied_knuth[89259]: 167 167
Feb  2 06:32:04 np0005604943 systemd[1]: libpod-99421253b846ff3cf4cf64396d04d1e0d17de99dceddd53dc585bd37074e7a36.scope: Deactivated successfully.
Feb  2 06:32:04 np0005604943 podman[89232]: 2026-02-02 11:32:04.864560082 +0000 UTC m=+0.172328194 container attach 99421253b846ff3cf4cf64396d04d1e0d17de99dceddd53dc585bd37074e7a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:04 np0005604943 podman[89232]: 2026-02-02 11:32:04.864945283 +0000 UTC m=+0.172713395 container died 99421253b846ff3cf4cf64396d04d1e0d17de99dceddd53dc585bd37074e7a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 06:32:04 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a100894e1e61f88017229e67ee53e8981594b8c887b556c2ae6bfe242404693f-merged.mount: Deactivated successfully.
Feb  2 06:32:04 np0005604943 podman[89232]: 2026-02-02 11:32:04.982938081 +0000 UTC m=+0.290706163 container remove 99421253b846ff3cf4cf64396d04d1e0d17de99dceddd53dc585bd37074e7a36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:05 np0005604943 systemd[1]: libpod-conmon-99421253b846ff3cf4cf64396d04d1e0d17de99dceddd53dc585bd37074e7a36.scope: Deactivated successfully.
Feb  2 06:32:05 np0005604943 podman[89288]: 2026-02-02 11:32:05.195370737 +0000 UTC m=+0.083661346 container create dfa1f6522161d552336704c79befd66ac853375a81d012f8bcb9f93103343c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bohr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:05 np0005604943 ceph-mgr[75558]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2157679858; not ready for session (expect reconnect)
Feb  2 06:32:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 06:32:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 06:32:05 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 06:32:05 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:05 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:05 np0005604943 ceph-mon[75271]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb  2 06:32:05 np0005604943 ceph-mon[75271]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb  2 06:32:05 np0005604943 podman[89288]: 2026-02-02 11:32:05.154337758 +0000 UTC m=+0.042628457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:05 np0005604943 systemd[1]: Started libpod-conmon-dfa1f6522161d552336704c79befd66ac853375a81d012f8bcb9f93103343c3b.scope.
Feb  2 06:32:05 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6980f8119223d133d651c131fd8407edc9679a64852e9f99c6695ac1dc38d984/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6980f8119223d133d651c131fd8407edc9679a64852e9f99c6695ac1dc38d984/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6980f8119223d133d651c131fd8407edc9679a64852e9f99c6695ac1dc38d984/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6980f8119223d133d651c131fd8407edc9679a64852e9f99c6695ac1dc38d984/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:05 np0005604943 podman[89288]: 2026-02-02 11:32:05.332628854 +0000 UTC m=+0.220919453 container init dfa1f6522161d552336704c79befd66ac853375a81d012f8bcb9f93103343c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:05 np0005604943 podman[89288]: 2026-02-02 11:32:05.341537933 +0000 UTC m=+0.229828552 container start dfa1f6522161d552336704c79befd66ac853375a81d012f8bcb9f93103343c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:05 np0005604943 podman[89288]: 2026-02-02 11:32:05.362609423 +0000 UTC m=+0.250900042 container attach dfa1f6522161d552336704c79befd66ac853375a81d012f8bcb9f93103343c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bohr, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v33: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb  2 06:32:05 np0005604943 ceph-osd[88236]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 44.460 iops: 11381.636 elapsed_sec: 0.264
Feb  2 06:32:05 np0005604943 ceph-osd[88236]: log_channel(cluster) log [WRN] : OSD bench result of 11381.635860 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 06:32:05 np0005604943 ceph-osd[88236]: osd.2 0 waiting for initial osdmap
Feb  2 06:32:05 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2[88232]: 2026-02-02T11:32:05.845+0000 7f4d9ec5c640 -1 osd.2 0 waiting for initial osdmap
Feb  2 06:32:05 np0005604943 ceph-osd[88236]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb  2 06:32:05 np0005604943 ceph-osd[88236]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb  2 06:32:05 np0005604943 ceph-osd[88236]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb  2 06:32:05 np0005604943 ceph-osd[88236]: osd.2 14 check_osdmap_features require_osd_release unknown -> tentacle
Feb  2 06:32:05 np0005604943 ceph-osd[88236]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 06:32:05 np0005604943 ceph-osd[88236]: osd.2 14 set_numa_affinity not setting numa affinity
Feb  2 06:32:05 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-osd-2[88232]: 2026-02-02T11:32:05.873+0000 7f4d9924f640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 06:32:05 np0005604943 ceph-osd[88236]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Feb  2 06:32:05 np0005604943 clever_bohr[89305]: [
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:    {
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:        "available": false,
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:        "being_replaced": false,
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:        "ceph_device_lvm": false,
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:        "lsm_data": {},
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:        "lvs": [],
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:        "path": "/dev/sr0",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:        "rejected_reasons": [
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "Insufficient space (<5GB)",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "Has a FileSystem"
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:        ],
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:        "sys_api": {
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "actuators": null,
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "device_nodes": [
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:                "sr0"
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            ],
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "devname": "sr0",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "human_readable_size": "482.00 KB",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "id_bus": "ata",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "model": "QEMU DVD-ROM",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "nr_requests": "2",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "parent": "/dev/sr0",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "partitions": {},
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "path": "/dev/sr0",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "removable": "1",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "rev": "2.5+",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "ro": "0",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "rotational": "1",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "sas_address": "",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "sas_device_handle": "",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "scheduler_mode": "mq-deadline",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "sectors": 0,
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "sectorsize": "2048",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "size": 493568.0,
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "support_discard": "2048",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "type": "disk",
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:            "vendor": "QEMU"
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:        }
Feb  2 06:32:05 np0005604943 clever_bohr[89305]:    }
Feb  2 06:32:05 np0005604943 clever_bohr[89305]: ]
Feb  2 06:32:05 np0005604943 systemd[1]: libpod-dfa1f6522161d552336704c79befd66ac853375a81d012f8bcb9f93103343c3b.scope: Deactivated successfully.
Feb  2 06:32:05 np0005604943 podman[89288]: 2026-02-02 11:32:05.943933987 +0000 UTC m=+0.832224556 container died dfa1f6522161d552336704c79befd66ac853375a81d012f8bcb9f93103343c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bohr, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:05 np0005604943 systemd[1]: var-lib-containers-storage-overlay-6980f8119223d133d651c131fd8407edc9679a64852e9f99c6695ac1dc38d984-merged.mount: Deactivated successfully.
Feb  2 06:32:05 np0005604943 podman[89288]: 2026-02-02 11:32:05.995703727 +0000 UTC m=+0.883994306 container remove dfa1f6522161d552336704c79befd66ac853375a81d012f8bcb9f93103343c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bohr, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 06:32:06 np0005604943 systemd[1]: libpod-conmon-dfa1f6522161d552336704c79befd66ac853375a81d012f8bcb9f93103343c3b.scope: Deactivated successfully.
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43686k
Feb  2 06:32:06 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43686k
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb  2 06:32:06 np0005604943 ceph-mgr[75558]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Feb  2 06:32:06 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mgr[75558]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2157679858; not ready for session (expect reconnect)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mgr[75558]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.twcemg(active, since 56s)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2157679858,v1:192.168.122.100:6811/2157679858] boot
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb  2 06:32:06 np0005604943 ceph-osd[88236]: osd.2 15 state: booting -> active
Feb  2 06:32:06 np0005604943 podman[90168]: 2026-02-02 11:32:06.543310414 +0000 UTC m=+0.053759388 container create 86952468403a72d0032e25bf5cfaf198678d3bd9eabf83c4d399510c0421ab88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_tesla, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 06:32:06 np0005604943 systemd[1]: Started libpod-conmon-86952468403a72d0032e25bf5cfaf198678d3bd9eabf83c4d399510c0421ab88.scope.
Feb  2 06:32:06 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:06 np0005604943 podman[90168]: 2026-02-02 11:32:06.615131815 +0000 UTC m=+0.125580829 container init 86952468403a72d0032e25bf5cfaf198678d3bd9eabf83c4d399510c0421ab88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:06 np0005604943 podman[90168]: 2026-02-02 11:32:06.52245417 +0000 UTC m=+0.032903174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:06 np0005604943 podman[90168]: 2026-02-02 11:32:06.620713887 +0000 UTC m=+0.131162861 container start 86952468403a72d0032e25bf5cfaf198678d3bd9eabf83c4d399510c0421ab88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_tesla, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:06 np0005604943 podman[90168]: 2026-02-02 11:32:06.623942081 +0000 UTC m=+0.134391055 container attach 86952468403a72d0032e25bf5cfaf198678d3bd9eabf83c4d399510c0421ab88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Feb  2 06:32:06 np0005604943 eloquent_tesla[90185]: 167 167
Feb  2 06:32:06 np0005604943 systemd[1]: libpod-86952468403a72d0032e25bf5cfaf198678d3bd9eabf83c4d399510c0421ab88.scope: Deactivated successfully.
Feb  2 06:32:06 np0005604943 podman[90168]: 2026-02-02 11:32:06.625498056 +0000 UTC m=+0.135947040 container died 86952468403a72d0032e25bf5cfaf198678d3bd9eabf83c4d399510c0421ab88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_tesla, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:06 np0005604943 systemd[1]: var-lib-containers-storage-overlay-446eedce9464e9dbd5f8b19ba38bb8e52d932294389a59c0fdb2e5b39db7c9be-merged.mount: Deactivated successfully.
Feb  2 06:32:06 np0005604943 podman[90168]: 2026-02-02 11:32:06.661450457 +0000 UTC m=+0.171899471 container remove 86952468403a72d0032e25bf5cfaf198678d3bd9eabf83c4d399510c0421ab88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 06:32:06 np0005604943 systemd[1]: libpod-conmon-86952468403a72d0032e25bf5cfaf198678d3bd9eabf83c4d399510c0421ab88.scope: Deactivated successfully.
Feb  2 06:32:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:32:06 np0005604943 podman[90208]: 2026-02-02 11:32:06.837487348 +0000 UTC m=+0.061780891 container create caf59be2aacc0ba9571a17fa011cbb45383ea833977dcfbf2cbdf541efbc9bfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 06:32:06 np0005604943 systemd[1]: Started libpod-conmon-caf59be2aacc0ba9571a17fa011cbb45383ea833977dcfbf2cbdf541efbc9bfe.scope.
Feb  2 06:32:06 np0005604943 podman[90208]: 2026-02-02 11:32:06.814333066 +0000 UTC m=+0.038626649 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:06 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4574cd6969bbe73d4763fa9a32f6b985d3fda3c96b5a3c3828bc854195a8051f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4574cd6969bbe73d4763fa9a32f6b985d3fda3c96b5a3c3828bc854195a8051f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4574cd6969bbe73d4763fa9a32f6b985d3fda3c96b5a3c3828bc854195a8051f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4574cd6969bbe73d4763fa9a32f6b985d3fda3c96b5a3c3828bc854195a8051f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4574cd6969bbe73d4763fa9a32f6b985d3fda3c96b5a3c3828bc854195a8051f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:06 np0005604943 podman[90208]: 2026-02-02 11:32:06.934747546 +0000 UTC m=+0.159041129 container init caf59be2aacc0ba9571a17fa011cbb45383ea833977dcfbf2cbdf541efbc9bfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:06 np0005604943 podman[90208]: 2026-02-02 11:32:06.949454452 +0000 UTC m=+0.173747995 container start caf59be2aacc0ba9571a17fa011cbb45383ea833977dcfbf2cbdf541efbc9bfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_galois, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 06:32:06 np0005604943 podman[90208]: 2026-02-02 11:32:06.953783648 +0000 UTC m=+0.178077231 container attach caf59be2aacc0ba9571a17fa011cbb45383ea833977dcfbf2cbdf541efbc9bfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Feb  2 06:32:07 np0005604943 ceph-mon[75271]: OSD bench result of 11381.635860 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 06:32:07 np0005604943 ceph-mon[75271]: Adjusting osd_memory_target on compute-0 to 43686k
Feb  2 06:32:07 np0005604943 ceph-mon[75271]: Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Feb  2 06:32:07 np0005604943 ceph-mon[75271]: osd.2 [v2:192.168.122.100:6810/2157679858,v1:192.168.122.100:6811/2157679858] boot
Feb  2 06:32:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Feb  2 06:32:07 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Feb  2 06:32:07 np0005604943 modest_galois[90224]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:32:07 np0005604943 modest_galois[90224]: --> All data devices are unavailable
Feb  2 06:32:07 np0005604943 systemd[1]: libpod-caf59be2aacc0ba9571a17fa011cbb45383ea833977dcfbf2cbdf541efbc9bfe.scope: Deactivated successfully.
Feb  2 06:32:07 np0005604943 podman[90208]: 2026-02-02 11:32:07.429921434 +0000 UTC m=+0.654214977 container died caf59be2aacc0ba9571a17fa011cbb45383ea833977dcfbf2cbdf541efbc9bfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:07 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4574cd6969bbe73d4763fa9a32f6b985d3fda3c96b5a3c3828bc854195a8051f-merged.mount: Deactivated successfully.
Feb  2 06:32:07 np0005604943 podman[90208]: 2026-02-02 11:32:07.480623103 +0000 UTC m=+0.704916636 container remove caf59be2aacc0ba9571a17fa011cbb45383ea833977dcfbf2cbdf541efbc9bfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_galois, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 06:32:07 np0005604943 systemd[1]: libpod-conmon-caf59be2aacc0ba9571a17fa011cbb45383ea833977dcfbf2cbdf541efbc9bfe.scope: Deactivated successfully.
Feb  2 06:32:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:07 np0005604943 podman[90318]: 2026-02-02 11:32:07.947396918 +0000 UTC m=+0.046690804 container create 99eabd4ad9bb9eb4072781162b61bd341e7121b1d2aa878c3cd7dc857d96fcaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:07 np0005604943 systemd[1]: Started libpod-conmon-99eabd4ad9bb9eb4072781162b61bd341e7121b1d2aa878c3cd7dc857d96fcaa.scope.
Feb  2 06:32:07 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:08 np0005604943 podman[90318]: 2026-02-02 11:32:08.012881276 +0000 UTC m=+0.112175182 container init 99eabd4ad9bb9eb4072781162b61bd341e7121b1d2aa878c3cd7dc857d96fcaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:08 np0005604943 podman[90318]: 2026-02-02 11:32:08.018166228 +0000 UTC m=+0.117460104 container start 99eabd4ad9bb9eb4072781162b61bd341e7121b1d2aa878c3cd7dc857d96fcaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:08 np0005604943 podman[90318]: 2026-02-02 11:32:07.926474452 +0000 UTC m=+0.025768358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:08 np0005604943 jovial_poincare[90335]: 167 167
Feb  2 06:32:08 np0005604943 systemd[1]: libpod-99eabd4ad9bb9eb4072781162b61bd341e7121b1d2aa878c3cd7dc857d96fcaa.scope: Deactivated successfully.
Feb  2 06:32:08 np0005604943 podman[90318]: 2026-02-02 11:32:08.023510183 +0000 UTC m=+0.122804059 container attach 99eabd4ad9bb9eb4072781162b61bd341e7121b1d2aa878c3cd7dc857d96fcaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:08 np0005604943 podman[90318]: 2026-02-02 11:32:08.023935165 +0000 UTC m=+0.123229061 container died 99eabd4ad9bb9eb4072781162b61bd341e7121b1d2aa878c3cd7dc857d96fcaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:08 np0005604943 systemd[1]: var-lib-containers-storage-overlay-e14bb089ddee6b1fd5ba7a227a3d7cdfddd0657883944ad7dcec7b06257ab44a-merged.mount: Deactivated successfully.
Feb  2 06:32:08 np0005604943 podman[90318]: 2026-02-02 11:32:08.065063887 +0000 UTC m=+0.164357763 container remove 99eabd4ad9bb9eb4072781162b61bd341e7121b1d2aa878c3cd7dc857d96fcaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 06:32:08 np0005604943 systemd[1]: libpod-conmon-99eabd4ad9bb9eb4072781162b61bd341e7121b1d2aa878c3cd7dc857d96fcaa.scope: Deactivated successfully.
Feb  2 06:32:08 np0005604943 podman[90360]: 2026-02-02 11:32:08.186790355 +0000 UTC m=+0.042573946 container create 56aeb3b3b2233cbfe18bd924dff27590ee184be069c2f5b57d59174e1db40971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mccarthy, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Feb  2 06:32:08 np0005604943 systemd[1]: Started libpod-conmon-56aeb3b3b2233cbfe18bd924dff27590ee184be069c2f5b57d59174e1db40971.scope.
Feb  2 06:32:08 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:08 np0005604943 podman[90360]: 2026-02-02 11:32:08.166985551 +0000 UTC m=+0.022769122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5809270a4718758f41ed0c4cfc90e99d7e79c47e762cec6755b111f31ca0803/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5809270a4718758f41ed0c4cfc90e99d7e79c47e762cec6755b111f31ca0803/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5809270a4718758f41ed0c4cfc90e99d7e79c47e762cec6755b111f31ca0803/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5809270a4718758f41ed0c4cfc90e99d7e79c47e762cec6755b111f31ca0803/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:08 np0005604943 podman[90360]: 2026-02-02 11:32:08.285244697 +0000 UTC m=+0.141028288 container init 56aeb3b3b2233cbfe18bd924dff27590ee184be069c2f5b57d59174e1db40971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mccarthy, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:32:08 np0005604943 podman[90360]: 2026-02-02 11:32:08.299430448 +0000 UTC m=+0.155214029 container start 56aeb3b3b2233cbfe18bd924dff27590ee184be069c2f5b57d59174e1db40971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mccarthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:08 np0005604943 podman[90360]: 2026-02-02 11:32:08.30430841 +0000 UTC m=+0.160092061 container attach 56aeb3b3b2233cbfe18bd924dff27590ee184be069c2f5b57d59174e1db40971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]: {
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:    "0": [
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:        {
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "devices": [
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "/dev/loop3"
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            ],
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_name": "ceph_lv0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_size": "21470642176",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "name": "ceph_lv0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "tags": {
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.crush_device_class": "",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.encrypted": "0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.osd_id": "0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.type": "block",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.vdo": "0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.with_tpm": "0"
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            },
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "type": "block",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "vg_name": "ceph_vg0"
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:        }
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:    ],
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:    "1": [
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:        {
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "devices": [
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "/dev/loop4"
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            ],
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_name": "ceph_lv1",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_size": "21470642176",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "name": "ceph_lv1",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "tags": {
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.crush_device_class": "",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.encrypted": "0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.osd_id": "1",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.type": "block",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.vdo": "0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.with_tpm": "0"
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            },
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "type": "block",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "vg_name": "ceph_vg1"
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:        }
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:    ],
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:    "2": [
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:        {
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "devices": [
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "/dev/loop5"
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            ],
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_name": "ceph_lv2",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_size": "21470642176",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "name": "ceph_lv2",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "tags": {
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.crush_device_class": "",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.encrypted": "0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.osd_id": "2",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.type": "block",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.vdo": "0",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:                "ceph.with_tpm": "0"
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            },
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "type": "block",
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:            "vg_name": "ceph_vg2"
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:        }
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]:    ]
Feb  2 06:32:08 np0005604943 sharp_mccarthy[90376]: }
Feb  2 06:32:08 np0005604943 systemd[1]: libpod-56aeb3b3b2233cbfe18bd924dff27590ee184be069c2f5b57d59174e1db40971.scope: Deactivated successfully.
Feb  2 06:32:08 np0005604943 podman[90360]: 2026-02-02 11:32:08.602472919 +0000 UTC m=+0.458256480 container died 56aeb3b3b2233cbfe18bd924dff27590ee184be069c2f5b57d59174e1db40971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mccarthy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:08 np0005604943 systemd[1]: var-lib-containers-storage-overlay-e5809270a4718758f41ed0c4cfc90e99d7e79c47e762cec6755b111f31ca0803-merged.mount: Deactivated successfully.
Feb  2 06:32:08 np0005604943 podman[90360]: 2026-02-02 11:32:08.644844787 +0000 UTC m=+0.500628348 container remove 56aeb3b3b2233cbfe18bd924dff27590ee184be069c2f5b57d59174e1db40971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mccarthy, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:08 np0005604943 systemd[1]: libpod-conmon-56aeb3b3b2233cbfe18bd924dff27590ee184be069c2f5b57d59174e1db40971.scope: Deactivated successfully.
Feb  2 06:32:09 np0005604943 podman[90461]: 2026-02-02 11:32:09.110584961 +0000 UTC m=+0.059802103 container create 56994f44412a3aac67746002d534788320b4ff1bef239f65ffca9f67d94c9798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 06:32:09 np0005604943 systemd[1]: Started libpod-conmon-56994f44412a3aac67746002d534788320b4ff1bef239f65ffca9f67d94c9798.scope.
Feb  2 06:32:09 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:09 np0005604943 podman[90461]: 2026-02-02 11:32:09.086858495 +0000 UTC m=+0.036075677 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:09 np0005604943 podman[90461]: 2026-02-02 11:32:09.196666306 +0000 UTC m=+0.145883528 container init 56994f44412a3aac67746002d534788320b4ff1bef239f65ffca9f67d94c9798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bardeen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:09 np0005604943 podman[90461]: 2026-02-02 11:32:09.206451029 +0000 UTC m=+0.155668191 container start 56994f44412a3aac67746002d534788320b4ff1bef239f65ffca9f67d94c9798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bardeen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:09 np0005604943 podman[90461]: 2026-02-02 11:32:09.210996131 +0000 UTC m=+0.160213303 container attach 56994f44412a3aac67746002d534788320b4ff1bef239f65ffca9f67d94c9798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:09 np0005604943 focused_bardeen[90503]: 167 167
Feb  2 06:32:09 np0005604943 systemd[1]: libpod-56994f44412a3aac67746002d534788320b4ff1bef239f65ffca9f67d94c9798.scope: Deactivated successfully.
Feb  2 06:32:09 np0005604943 podman[90461]: 2026-02-02 11:32:09.213120333 +0000 UTC m=+0.162337555 container died 56994f44412a3aac67746002d534788320b4ff1bef239f65ffca9f67d94c9798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bardeen, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:09 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4750501302d5bfbea263913d2825044be324f7f6efa13b6fce14b58b9dae6562-merged.mount: Deactivated successfully.
Feb  2 06:32:09 np0005604943 podman[90461]: 2026-02-02 11:32:09.272182574 +0000 UTC m=+0.221399736 container remove 56994f44412a3aac67746002d534788320b4ff1bef239f65ffca9f67d94c9798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 06:32:09 np0005604943 systemd[1]: libpod-conmon-56994f44412a3aac67746002d534788320b4ff1bef239f65ffca9f67d94c9798.scope: Deactivated successfully.
Feb  2 06:32:09 np0005604943 python3[90505]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:09 np0005604943 podman[90524]: 2026-02-02 11:32:09.407193956 +0000 UTC m=+0.054785099 container create d01c1d485ab2b33e343c0a898d3f40a388b3eb1be6a57330e2085a7a7c9a25fe (image=quay.io/ceph/ceph:v20, name=admiring_rhodes, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:09 np0005604943 systemd[1]: Started libpod-conmon-d01c1d485ab2b33e343c0a898d3f40a388b3eb1be6a57330e2085a7a7c9a25fe.scope.
Feb  2 06:32:09 np0005604943 podman[90543]: 2026-02-02 11:32:09.463680143 +0000 UTC m=+0.058544538 container create f6da963b87553cd1f80cb397c4e1955f1e86fd540a012546e00e6b727228d7d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 06:32:09 np0005604943 podman[90524]: 2026-02-02 11:32:09.382655795 +0000 UTC m=+0.030246968 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:09 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:09 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1869c40132315534fca683d095dd526c06ecb10a0d1bebb06bd20fab764a7772/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:09 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1869c40132315534fca683d095dd526c06ecb10a0d1bebb06bd20fab764a7772/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:09 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1869c40132315534fca683d095dd526c06ecb10a0d1bebb06bd20fab764a7772/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:09 np0005604943 systemd[1]: Started libpod-conmon-f6da963b87553cd1f80cb397c4e1955f1e86fd540a012546e00e6b727228d7d9.scope.
Feb  2 06:32:09 np0005604943 podman[90524]: 2026-02-02 11:32:09.512503108 +0000 UTC m=+0.160094301 container init d01c1d485ab2b33e343c0a898d3f40a388b3eb1be6a57330e2085a7a7c9a25fe (image=quay.io/ceph/ceph:v20, name=admiring_rhodes, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:09 np0005604943 podman[90524]: 2026-02-02 11:32:09.521632582 +0000 UTC m=+0.169223735 container start d01c1d485ab2b33e343c0a898d3f40a388b3eb1be6a57330e2085a7a7c9a25fe (image=quay.io/ceph/ceph:v20, name=admiring_rhodes, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:09 np0005604943 podman[90524]: 2026-02-02 11:32:09.525853714 +0000 UTC m=+0.173444927 container attach d01c1d485ab2b33e343c0a898d3f40a388b3eb1be6a57330e2085a7a7c9a25fe (image=quay.io/ceph/ceph:v20, name=admiring_rhodes, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 06:32:09 np0005604943 podman[90543]: 2026-02-02 11:32:09.444288741 +0000 UTC m=+0.039153136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:09 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:09 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7031337d24407af2bfa55ca01bb01c4ef3fcfc1ff7a69c847f4d5e25861f0b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:09 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7031337d24407af2bfa55ca01bb01c4ef3fcfc1ff7a69c847f4d5e25861f0b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:09 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7031337d24407af2bfa55ca01bb01c4ef3fcfc1ff7a69c847f4d5e25861f0b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:09 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7031337d24407af2bfa55ca01bb01c4ef3fcfc1ff7a69c847f4d5e25861f0b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:32:09
Feb  2 06:32:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:32:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:32:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['.mgr']
Feb  2 06:32:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:32:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:09 np0005604943 podman[90543]: 2026-02-02 11:32:09.586650346 +0000 UTC m=+0.181514741 container init f6da963b87553cd1f80cb397c4e1955f1e86fd540a012546e00e6b727228d7d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 06:32:09 np0005604943 podman[90543]: 2026-02-02 11:32:09.596258714 +0000 UTC m=+0.191123099 container start f6da963b87553cd1f80cb397c4e1955f1e86fd540a012546e00e6b727228d7d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:09 np0005604943 podman[90543]: 2026-02-02 11:32:09.600085185 +0000 UTC m=+0.194949640 container attach f6da963b87553cd1f80cb397c4e1955f1e86fd540a012546e00e6b727228d7d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 06:32:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  2 06:32:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4164575799' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb  2 06:32:10 np0005604943 admiring_rhodes[90555]: 
Feb  2 06:32:10 np0005604943 admiring_rhodes[90555]: {"fsid":"4548a36b-7cdc-5e3e-a814-4e1571be1fae","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":78,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1770031926,"num_in_osds":3,"osd_in_since":1770031906,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502898688,"bytes_avail":63909027840,"bytes_total":64411926528},"fsmap":{"epoch":1,"btime":"2026-02-02T11:30:49:598633+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-02T11:30:49.600559+0000","services":{}},"progress_events":{}}
Feb  2 06:32:10 np0005604943 systemd[1]: libpod-d01c1d485ab2b33e343c0a898d3f40a388b3eb1be6a57330e2085a7a7c9a25fe.scope: Deactivated successfully.
Feb  2 06:32:10 np0005604943 podman[90650]: 2026-02-02 11:32:10.096347415 +0000 UTC m=+0.020454824 container died d01c1d485ab2b33e343c0a898d3f40a388b3eb1be6a57330e2085a7a7c9a25fe (image=quay.io/ceph/ceph:v20, name=admiring_rhodes, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 06:32:10 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1869c40132315534fca683d095dd526c06ecb10a0d1bebb06bd20fab764a7772-merged.mount: Deactivated successfully.
Feb  2 06:32:10 np0005604943 podman[90650]: 2026-02-02 11:32:10.136503738 +0000 UTC m=+0.060611127 container remove d01c1d485ab2b33e343c0a898d3f40a388b3eb1be6a57330e2085a7a7c9a25fe (image=quay.io/ceph/ceph:v20, name=admiring_rhodes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:10 np0005604943 systemd[1]: libpod-conmon-d01c1d485ab2b33e343c0a898d3f40a388b3eb1be6a57330e2085a7a7c9a25fe.scope: Deactivated successfully.
Feb  2 06:32:10 np0005604943 lvm[90680]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:32:10 np0005604943 lvm[90680]: VG ceph_vg1 finished
Feb  2 06:32:10 np0005604943 lvm[90677]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:32:10 np0005604943 lvm[90677]: VG ceph_vg0 finished
Feb  2 06:32:10 np0005604943 lvm[90682]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:32:10 np0005604943 lvm[90682]: VG ceph_vg2 finished
Feb  2 06:32:10 np0005604943 crazy_bartik[90564]: {}
Feb  2 06:32:10 np0005604943 systemd[1]: libpod-f6da963b87553cd1f80cb397c4e1955f1e86fd540a012546e00e6b727228d7d9.scope: Deactivated successfully.
Feb  2 06:32:10 np0005604943 systemd[1]: libpod-f6da963b87553cd1f80cb397c4e1955f1e86fd540a012546e00e6b727228d7d9.scope: Consumed 1.059s CPU time.
Feb  2 06:32:10 np0005604943 podman[90543]: 2026-02-02 11:32:10.343125325 +0000 UTC m=+0.937989710 container died f6da963b87553cd1f80cb397c4e1955f1e86fd540a012546e00e6b727228d7d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:10 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b7031337d24407af2bfa55ca01bb01c4ef3fcfc1ff7a69c847f4d5e25861f0b1-merged.mount: Deactivated successfully.
Feb  2 06:32:10 np0005604943 podman[90543]: 2026-02-02 11:32:10.395996887 +0000 UTC m=+0.990861282 container remove f6da963b87553cd1f80cb397c4e1955f1e86fd540a012546e00e6b727228d7d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 06:32:10 np0005604943 systemd[1]: libpod-conmon-f6da963b87553cd1f80cb397c4e1955f1e86fd540a012546e00e6b727228d7d9.scope: Deactivated successfully.
Feb  2 06:32:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:10 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:10 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:10 np0005604943 python3[90739]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:10 np0005604943 podman[90748]: 2026-02-02 11:32:10.688681228 +0000 UTC m=+0.043258995 container create 6b17cf47302f3f75d9efd887c2ea705695ba4c918b882d4601b3e3b2542e1802 (image=quay.io/ceph/ceph:v20, name=condescending_vaughan, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 06:32:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:32:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:32:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:32:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:32:10 np0005604943 systemd[1]: Started libpod-conmon-6b17cf47302f3f75d9efd887c2ea705695ba4c918b882d4601b3e3b2542e1802.scope.
Feb  2 06:32:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:32:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:32:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:32:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:32:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:32:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:32:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:32:10 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/810742b9de6a2a9a39329103bb30fdeb4b470901f42acf8eeba593be4e9d0eda/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/810742b9de6a2a9a39329103bb30fdeb4b470901f42acf8eeba593be4e9d0eda/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:10 np0005604943 podman[90748]: 2026-02-02 11:32:10.761404635 +0000 UTC m=+0.115982432 container init 6b17cf47302f3f75d9efd887c2ea705695ba4c918b882d4601b3e3b2542e1802 (image=quay.io/ceph/ceph:v20, name=condescending_vaughan, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 06:32:10 np0005604943 podman[90748]: 2026-02-02 11:32:10.767572763 +0000 UTC m=+0.122150550 container start 6b17cf47302f3f75d9efd887c2ea705695ba4c918b882d4601b3e3b2542e1802 (image=quay.io/ceph/ceph:v20, name=condescending_vaughan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Feb  2 06:32:10 np0005604943 podman[90748]: 2026-02-02 11:32:10.675586038 +0000 UTC m=+0.030163805 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:10 np0005604943 podman[90748]: 2026-02-02 11:32:10.771497567 +0000 UTC m=+0.126075394 container attach 6b17cf47302f3f75d9efd887c2ea705695ba4c918b882d4601b3e3b2542e1802 (image=quay.io/ceph/ceph:v20, name=condescending_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 06:32:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 06:32:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3177058227' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 06:32:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Feb  2 06:32:11 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3177058227' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 06:32:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3177058227' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 06:32:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Feb  2 06:32:11 np0005604943 condescending_vaughan[90763]: pool 'vms' created
Feb  2 06:32:11 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Feb  2 06:32:11 np0005604943 systemd[1]: libpod-6b17cf47302f3f75d9efd887c2ea705695ba4c918b882d4601b3e3b2542e1802.scope: Deactivated successfully.
Feb  2 06:32:11 np0005604943 podman[90748]: 2026-02-02 11:32:11.659419095 +0000 UTC m=+1.013996842 container died 6b17cf47302f3f75d9efd887c2ea705695ba4c918b882d4601b3e3b2542e1802 (image=quay.io/ceph/ceph:v20, name=condescending_vaughan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 06:32:11 np0005604943 systemd[1]: var-lib-containers-storage-overlay-810742b9de6a2a9a39329103bb30fdeb4b470901f42acf8eeba593be4e9d0eda-merged.mount: Deactivated successfully.
Feb  2 06:32:11 np0005604943 podman[90748]: 2026-02-02 11:32:11.705171681 +0000 UTC m=+1.059749448 container remove 6b17cf47302f3f75d9efd887c2ea705695ba4c918b882d4601b3e3b2542e1802 (image=quay.io/ceph/ceph:v20, name=condescending_vaughan, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:11 np0005604943 systemd[1]: libpod-conmon-6b17cf47302f3f75d9efd887c2ea705695ba4c918b882d4601b3e3b2542e1802.scope: Deactivated successfully.
Feb  2 06:32:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:32:12 np0005604943 python3[90826]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:12 np0005604943 podman[90827]: 2026-02-02 11:32:12.053294847 +0000 UTC m=+0.034723497 container create ab1438320cf99b795a9d59915d8757c579d0c1fc948a68f4024773dd7c733753 (image=quay.io/ceph/ceph:v20, name=focused_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb  2 06:32:12 np0005604943 systemd[1]: Started libpod-conmon-ab1438320cf99b795a9d59915d8757c579d0c1fc948a68f4024773dd7c733753.scope.
Feb  2 06:32:12 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:32:12 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72447f3a9416a462d8e39852f63f915b33bd6ea20043970d732cb30d1f2519d5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72447f3a9416a462d8e39852f63f915b33bd6ea20043970d732cb30d1f2519d5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:12 np0005604943 podman[90827]: 2026-02-02 11:32:12.036446849 +0000 UTC m=+0.017875529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:12 np0005604943 podman[90827]: 2026-02-02 11:32:12.134805249 +0000 UTC m=+0.116233919 container init ab1438320cf99b795a9d59915d8757c579d0c1fc948a68f4024773dd7c733753 (image=quay.io/ceph/ceph:v20, name=focused_euler, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 06:32:12 np0005604943 podman[90827]: 2026-02-02 11:32:12.142577374 +0000 UTC m=+0.124006024 container start ab1438320cf99b795a9d59915d8757c579d0c1fc948a68f4024773dd7c733753 (image=quay.io/ceph/ceph:v20, name=focused_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 06:32:12 np0005604943 podman[90827]: 2026-02-02 11:32:12.145297093 +0000 UTC m=+0.126725743 container attach ab1438320cf99b795a9d59915d8757c579d0c1fc948a68f4024773dd7c733753 (image=quay.io/ceph/ceph:v20, name=focused_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 06:32:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3822893138' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 06:32:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Feb  2 06:32:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3822893138' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 06:32:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Feb  2 06:32:12 np0005604943 focused_euler[90842]: pool 'volumes' created
Feb  2 06:32:12 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Feb  2 06:32:12 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:32:12 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3177058227' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 06:32:12 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3822893138' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 06:32:12 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:32:12 np0005604943 systemd[1]: libpod-ab1438320cf99b795a9d59915d8757c579d0c1fc948a68f4024773dd7c733753.scope: Deactivated successfully.
Feb  2 06:32:12 np0005604943 podman[90827]: 2026-02-02 11:32:12.674827116 +0000 UTC m=+0.656255776 container died ab1438320cf99b795a9d59915d8757c579d0c1fc948a68f4024773dd7c733753 (image=quay.io/ceph/ceph:v20, name=focused_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 06:32:12 np0005604943 systemd[1]: var-lib-containers-storage-overlay-72447f3a9416a462d8e39852f63f915b33bd6ea20043970d732cb30d1f2519d5-merged.mount: Deactivated successfully.
Feb  2 06:32:12 np0005604943 podman[90827]: 2026-02-02 11:32:12.712047145 +0000 UTC m=+0.693475785 container remove ab1438320cf99b795a9d59915d8757c579d0c1fc948a68f4024773dd7c733753 (image=quay.io/ceph/ceph:v20, name=focused_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:12 np0005604943 systemd[1]: libpod-conmon-ab1438320cf99b795a9d59915d8757c579d0c1fc948a68f4024773dd7c733753.scope: Deactivated successfully.
Feb  2 06:32:13 np0005604943 python3[90908]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:13 np0005604943 podman[90909]: 2026-02-02 11:32:13.085652061 +0000 UTC m=+0.059981620 container create 5754c5cfb24c7a407ed1f9161cb38ac917f1d94f08620e8f34db3ddb9aa6da4e (image=quay.io/ceph/ceph:v20, name=serene_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:13 np0005604943 systemd[1]: Started libpod-conmon-5754c5cfb24c7a407ed1f9161cb38ac917f1d94f08620e8f34db3ddb9aa6da4e.scope.
Feb  2 06:32:13 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f79c1115fef51b489a86e0c354a105c2d7af379bca8e36aece9975e802f7eb2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f79c1115fef51b489a86e0c354a105c2d7af379bca8e36aece9975e802f7eb2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:13 np0005604943 podman[90909]: 2026-02-02 11:32:13.058884184 +0000 UTC m=+0.033213793 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:13 np0005604943 podman[90909]: 2026-02-02 11:32:13.160744136 +0000 UTC m=+0.135073725 container init 5754c5cfb24c7a407ed1f9161cb38ac917f1d94f08620e8f34db3ddb9aa6da4e (image=quay.io/ceph/ceph:v20, name=serene_lalande, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 06:32:13 np0005604943 podman[90909]: 2026-02-02 11:32:13.164303439 +0000 UTC m=+0.138632998 container start 5754c5cfb24c7a407ed1f9161cb38ac917f1d94f08620e8f34db3ddb9aa6da4e (image=quay.io/ceph/ceph:v20, name=serene_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 06:32:13 np0005604943 podman[90909]: 2026-02-02 11:32:13.169130689 +0000 UTC m=+0.143460318 container attach 5754c5cfb24c7a407ed1f9161cb38ac917f1d94f08620e8f34db3ddb9aa6da4e (image=quay.io/ceph/ceph:v20, name=serene_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v41: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 06:32:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3653754925' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 06:32:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Feb  2 06:32:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3653754925' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 06:32:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Feb  2 06:32:13 np0005604943 serene_lalande[90924]: pool 'backups' created
Feb  2 06:32:13 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Feb  2 06:32:13 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3822893138' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 06:32:13 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3653754925' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 06:32:13 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3653754925' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 06:32:13 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:32:13 np0005604943 systemd[1]: libpod-5754c5cfb24c7a407ed1f9161cb38ac917f1d94f08620e8f34db3ddb9aa6da4e.scope: Deactivated successfully.
Feb  2 06:32:13 np0005604943 podman[90909]: 2026-02-02 11:32:13.68268933 +0000 UTC m=+0.657018919 container died 5754c5cfb24c7a407ed1f9161cb38ac917f1d94f08620e8f34db3ddb9aa6da4e (image=quay.io/ceph/ceph:v20, name=serene_lalande, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:13 np0005604943 systemd[1]: var-lib-containers-storage-overlay-3f79c1115fef51b489a86e0c354a105c2d7af379bca8e36aece9975e802f7eb2-merged.mount: Deactivated successfully.
Feb  2 06:32:13 np0005604943 podman[90909]: 2026-02-02 11:32:13.754959924 +0000 UTC m=+0.729289503 container remove 5754c5cfb24c7a407ed1f9161cb38ac917f1d94f08620e8f34db3ddb9aa6da4e (image=quay.io/ceph/ceph:v20, name=serene_lalande, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:13 np0005604943 systemd[1]: libpod-conmon-5754c5cfb24c7a407ed1f9161cb38ac917f1d94f08620e8f34db3ddb9aa6da4e.scope: Deactivated successfully.
Feb  2 06:32:13 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:32:14 np0005604943 python3[90990]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:14 np0005604943 podman[90991]: 2026-02-02 11:32:14.107808848 +0000 UTC m=+0.053179362 container create 16111fba94d8d741031fda0c706eeea4a8333f326fc3eb7bb15216bf9b09ddf1 (image=quay.io/ceph/ceph:v20, name=charming_neumann, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 06:32:14 np0005604943 systemd[1]: Started libpod-conmon-16111fba94d8d741031fda0c706eeea4a8333f326fc3eb7bb15216bf9b09ddf1.scope.
Feb  2 06:32:14 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f40b561685899f9a620b215330645280d2dd3af0f23e30b5ffa1a2c75006d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3f40b561685899f9a620b215330645280d2dd3af0f23e30b5ffa1a2c75006d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:14 np0005604943 podman[90991]: 2026-02-02 11:32:14.087142119 +0000 UTC m=+0.032512633 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:14 np0005604943 podman[90991]: 2026-02-02 11:32:14.201412909 +0000 UTC m=+0.146783383 container init 16111fba94d8d741031fda0c706eeea4a8333f326fc3eb7bb15216bf9b09ddf1 (image=quay.io/ceph/ceph:v20, name=charming_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 06:32:14 np0005604943 podman[90991]: 2026-02-02 11:32:14.207089484 +0000 UTC m=+0.152459998 container start 16111fba94d8d741031fda0c706eeea4a8333f326fc3eb7bb15216bf9b09ddf1 (image=quay.io/ceph/ceph:v20, name=charming_neumann, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:14 np0005604943 podman[90991]: 2026-02-02 11:32:14.216396194 +0000 UTC m=+0.161766668 container attach 16111fba94d8d741031fda0c706eeea4a8333f326fc3eb7bb15216bf9b09ddf1 (image=quay.io/ceph/ceph:v20, name=charming_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 06:32:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3721595935' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 06:32:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Feb  2 06:32:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3721595935' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 06:32:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Feb  2 06:32:14 np0005604943 charming_neumann[91007]: pool 'images' created
Feb  2 06:32:14 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Feb  2 06:32:14 np0005604943 systemd[1]: libpod-16111fba94d8d741031fda0c706eeea4a8333f326fc3eb7bb15216bf9b09ddf1.scope: Deactivated successfully.
Feb  2 06:32:14 np0005604943 conmon[91007]: conmon 16111fba94d8d741031f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16111fba94d8d741031fda0c706eeea4a8333f326fc3eb7bb15216bf9b09ddf1.scope/container/memory.events
Feb  2 06:32:14 np0005604943 podman[90991]: 2026-02-02 11:32:14.701023616 +0000 UTC m=+0.646394090 container died 16111fba94d8d741031fda0c706eeea4a8333f326fc3eb7bb15216bf9b09ddf1 (image=quay.io/ceph/ceph:v20, name=charming_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Feb  2 06:32:14 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:32:14 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3721595935' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 06:32:14 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3721595935' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 06:32:14 np0005604943 systemd[1]: var-lib-containers-storage-overlay-c3f40b561685899f9a620b215330645280d2dd3af0f23e30b5ffa1a2c75006d4-merged.mount: Deactivated successfully.
Feb  2 06:32:14 np0005604943 podman[90991]: 2026-02-02 11:32:14.823112553 +0000 UTC m=+0.768483067 container remove 16111fba94d8d741031fda0c706eeea4a8333f326fc3eb7bb15216bf9b09ddf1 (image=quay.io/ceph/ceph:v20, name=charming_neumann, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:14 np0005604943 systemd[1]: libpod-conmon-16111fba94d8d741031fda0c706eeea4a8333f326fc3eb7bb15216bf9b09ddf1.scope: Deactivated successfully.
Feb  2 06:32:15 np0005604943 python3[91072]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:15 np0005604943 podman[91073]: 2026-02-02 11:32:15.167501462 +0000 UTC m=+0.056579660 container create 22521d1c5e7fa6b90b0b02fbbaf817fb05ac202b28d769ca7ed43b9c4d04db4d (image=quay.io/ceph/ceph:v20, name=great_elgamal, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:15 np0005604943 systemd[1]: Started libpod-conmon-22521d1c5e7fa6b90b0b02fbbaf817fb05ac202b28d769ca7ed43b9c4d04db4d.scope.
Feb  2 06:32:15 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da66d49ab14412e80022b2ed9c39128c8cb3e9c8d9c97c2305d49fa5c4601f10/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da66d49ab14412e80022b2ed9c39128c8cb3e9c8d9c97c2305d49fa5c4601f10/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:15 np0005604943 podman[91073]: 2026-02-02 11:32:15.145435133 +0000 UTC m=+0.034513341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:15 np0005604943 podman[91073]: 2026-02-02 11:32:15.247350146 +0000 UTC m=+0.136428354 container init 22521d1c5e7fa6b90b0b02fbbaf817fb05ac202b28d769ca7ed43b9c4d04db4d (image=quay.io/ceph/ceph:v20, name=great_elgamal, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 06:32:15 np0005604943 podman[91073]: 2026-02-02 11:32:15.253944137 +0000 UTC m=+0.143022325 container start 22521d1c5e7fa6b90b0b02fbbaf817fb05ac202b28d769ca7ed43b9c4d04db4d (image=quay.io/ceph/ceph:v20, name=great_elgamal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 06:32:15 np0005604943 podman[91073]: 2026-02-02 11:32:15.257726517 +0000 UTC m=+0.146804695 container attach 22521d1c5e7fa6b90b0b02fbbaf817fb05ac202b28d769ca7ed43b9c4d04db4d (image=quay.io/ceph/ceph:v20, name=great_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 06:32:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v44: 5 pgs: 3 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 06:32:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2655259839' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 06:32:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:32:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Feb  2 06:32:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2655259839' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 06:32:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Feb  2 06:32:15 np0005604943 great_elgamal[91088]: pool 'cephfs.cephfs.meta' created
Feb  2 06:32:15 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Feb  2 06:32:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:32:15 np0005604943 systemd[1]: libpod-22521d1c5e7fa6b90b0b02fbbaf817fb05ac202b28d769ca7ed43b9c4d04db4d.scope: Deactivated successfully.
Feb  2 06:32:15 np0005604943 podman[91073]: 2026-02-02 11:32:15.705422039 +0000 UTC m=+0.594500207 container died 22521d1c5e7fa6b90b0b02fbbaf817fb05ac202b28d769ca7ed43b9c4d04db4d (image=quay.io/ceph/ceph:v20, name=great_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:15 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2655259839' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 06:32:15 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2655259839' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 06:32:15 np0005604943 systemd[1]: var-lib-containers-storage-overlay-da66d49ab14412e80022b2ed9c39128c8cb3e9c8d9c97c2305d49fa5c4601f10-merged.mount: Deactivated successfully.
Feb  2 06:32:15 np0005604943 podman[91073]: 2026-02-02 11:32:15.740279048 +0000 UTC m=+0.629357246 container remove 22521d1c5e7fa6b90b0b02fbbaf817fb05ac202b28d769ca7ed43b9c4d04db4d (image=quay.io/ceph/ceph:v20, name=great_elgamal, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:15 np0005604943 systemd[1]: libpod-conmon-22521d1c5e7fa6b90b0b02fbbaf817fb05ac202b28d769ca7ed43b9c4d04db4d.scope: Deactivated successfully.
Feb  2 06:32:16 np0005604943 python3[91153]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:16 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:32:16 np0005604943 podman[91154]: 2026-02-02 11:32:16.145347616 +0000 UTC m=+0.086170558 container create df234281bfa49a64501475d8e9afd777fcf6d3c4478a6931d932925d84b60660 (image=quay.io/ceph/ceph:v20, name=wonderful_mendeleev, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 06:32:16 np0005604943 podman[91154]: 2026-02-02 11:32:16.079874038 +0000 UTC m=+0.020696950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:16 np0005604943 systemd[1]: Started libpod-conmon-df234281bfa49a64501475d8e9afd777fcf6d3c4478a6931d932925d84b60660.scope.
Feb  2 06:32:16 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa2d60378e0f9be15f3bbdf76638a71648f739402bfa72b888a967c280354df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa2d60378e0f9be15f3bbdf76638a71648f739402bfa72b888a967c280354df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:16 np0005604943 podman[91154]: 2026-02-02 11:32:16.229545685 +0000 UTC m=+0.170368607 container init df234281bfa49a64501475d8e9afd777fcf6d3c4478a6931d932925d84b60660 (image=quay.io/ceph/ceph:v20, name=wonderful_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:16 np0005604943 podman[91154]: 2026-02-02 11:32:16.234909231 +0000 UTC m=+0.175732133 container start df234281bfa49a64501475d8e9afd777fcf6d3c4478a6931d932925d84b60660 (image=quay.io/ceph/ceph:v20, name=wonderful_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 06:32:16 np0005604943 podman[91154]: 2026-02-02 11:32:16.237918908 +0000 UTC m=+0.178741900 container attach df234281bfa49a64501475d8e9afd777fcf6d3c4478a6931d932925d84b60660 (image=quay.io/ceph/ceph:v20, name=wonderful_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 06:32:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 06:32:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2548061199' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 06:32:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Feb  2 06:32:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2548061199' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 06:32:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Feb  2 06:32:16 np0005604943 wonderful_mendeleev[91169]: pool 'cephfs.cephfs.data' created
Feb  2 06:32:16 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Feb  2 06:32:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:32:16 np0005604943 systemd[1]: libpod-df234281bfa49a64501475d8e9afd777fcf6d3c4478a6931d932925d84b60660.scope: Deactivated successfully.
Feb  2 06:32:16 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2548061199' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb  2 06:32:16 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2548061199' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 06:32:16 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:32:16 np0005604943 podman[91196]: 2026-02-02 11:32:16.756312058 +0000 UTC m=+0.025597702 container died df234281bfa49a64501475d8e9afd777fcf6d3c4478a6931d932925d84b60660 (image=quay.io/ceph/ceph:v20, name=wonderful_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 06:32:16 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1fa2d60378e0f9be15f3bbdf76638a71648f739402bfa72b888a967c280354df-merged.mount: Deactivated successfully.
Feb  2 06:32:16 np0005604943 podman[91196]: 2026-02-02 11:32:16.801001653 +0000 UTC m=+0.070287287 container remove df234281bfa49a64501475d8e9afd777fcf6d3c4478a6931d932925d84b60660 (image=quay.io/ceph/ceph:v20, name=wonderful_mendeleev, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 06:32:16 np0005604943 systemd[1]: libpod-conmon-df234281bfa49a64501475d8e9afd777fcf6d3c4478a6931d932925d84b60660.scope: Deactivated successfully.
Feb  2 06:32:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:32:17 np0005604943 python3[91236]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:17 np0005604943 podman[91237]: 2026-02-02 11:32:17.236319007 +0000 UTC m=+0.059530176 container create d9f55532f95b8dfbdf9cf8decac3c04761d81abbffb20c9b959d98b8bd49ce11 (image=quay.io/ceph/ceph:v20, name=practical_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 06:32:17 np0005604943 systemd[1]: Started libpod-conmon-d9f55532f95b8dfbdf9cf8decac3c04761d81abbffb20c9b959d98b8bd49ce11.scope.
Feb  2 06:32:17 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:17 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce0ac560da3b71780597a3259e413e0050d7486868a23c0801a0d7aaf57cfdd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:17 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce0ac560da3b71780597a3259e413e0050d7486868a23c0801a0d7aaf57cfdd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:17 np0005604943 podman[91237]: 2026-02-02 11:32:17.208873571 +0000 UTC m=+0.032084790 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:17 np0005604943 podman[91237]: 2026-02-02 11:32:17.310947959 +0000 UTC m=+0.134159168 container init d9f55532f95b8dfbdf9cf8decac3c04761d81abbffb20c9b959d98b8bd49ce11 (image=quay.io/ceph/ceph:v20, name=practical_solomon, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:17 np0005604943 podman[91237]: 2026-02-02 11:32:17.317153769 +0000 UTC m=+0.140364928 container start d9f55532f95b8dfbdf9cf8decac3c04761d81abbffb20c9b959d98b8bd49ce11 (image=quay.io/ceph/ceph:v20, name=practical_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:32:17 np0005604943 podman[91237]: 2026-02-02 11:32:17.321866396 +0000 UTC m=+0.145077565 container attach d9f55532f95b8dfbdf9cf8decac3c04761d81abbffb20c9b959d98b8bd49ce11 (image=quay.io/ceph/ceph:v20, name=practical_solomon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v47: 7 pgs: 1 creating+peering, 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Feb  2 06:32:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Feb  2 06:32:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Feb  2 06:32:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:32:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Feb  2 06:32:17 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1527311046' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb  2 06:32:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Feb  2 06:32:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1527311046' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb  2 06:32:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Feb  2 06:32:18 np0005604943 practical_solomon[91252]: enabled application 'rbd' on pool 'vms'
Feb  2 06:32:18 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Feb  2 06:32:18 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/1527311046' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb  2 06:32:18 np0005604943 systemd[1]: libpod-d9f55532f95b8dfbdf9cf8decac3c04761d81abbffb20c9b959d98b8bd49ce11.scope: Deactivated successfully.
Feb  2 06:32:18 np0005604943 podman[91237]: 2026-02-02 11:32:18.740187122 +0000 UTC m=+1.563398281 container died d9f55532f95b8dfbdf9cf8decac3c04761d81abbffb20c9b959d98b8bd49ce11 (image=quay.io/ceph/ceph:v20, name=practical_solomon, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:18 np0005604943 systemd[1]: var-lib-containers-storage-overlay-3ce0ac560da3b71780597a3259e413e0050d7486868a23c0801a0d7aaf57cfdd-merged.mount: Deactivated successfully.
Feb  2 06:32:18 np0005604943 podman[91237]: 2026-02-02 11:32:18.778526163 +0000 UTC m=+1.601737312 container remove d9f55532f95b8dfbdf9cf8decac3c04761d81abbffb20c9b959d98b8bd49ce11 (image=quay.io/ceph/ceph:v20, name=practical_solomon, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:18 np0005604943 systemd[1]: libpod-conmon-d9f55532f95b8dfbdf9cf8decac3c04761d81abbffb20c9b959d98b8bd49ce11.scope: Deactivated successfully.
Feb  2 06:32:19 np0005604943 python3[91315]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:19 np0005604943 podman[91316]: 2026-02-02 11:32:19.135988851 +0000 UTC m=+0.061444432 container create 2a7ca65c7131cce4b5d863bd813eaf6e7986afbab78ba924426e2c4fac591d75 (image=quay.io/ceph/ceph:v20, name=pensive_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 06:32:19 np0005604943 systemd[1]: Started libpod-conmon-2a7ca65c7131cce4b5d863bd813eaf6e7986afbab78ba924426e2c4fac591d75.scope.
Feb  2 06:32:19 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:19 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993dcc6e32df6a890adaafb20e52614604e4cd3ff81db3e284fa85ec4fea4692/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:19 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993dcc6e32df6a890adaafb20e52614604e4cd3ff81db3e284fa85ec4fea4692/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:19 np0005604943 podman[91316]: 2026-02-02 11:32:19.110518223 +0000 UTC m=+0.035973844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:19 np0005604943 podman[91316]: 2026-02-02 11:32:19.222670533 +0000 UTC m=+0.148126104 container init 2a7ca65c7131cce4b5d863bd813eaf6e7986afbab78ba924426e2c4fac591d75 (image=quay.io/ceph/ceph:v20, name=pensive_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:19 np0005604943 podman[91316]: 2026-02-02 11:32:19.228021507 +0000 UTC m=+0.153477058 container start 2a7ca65c7131cce4b5d863bd813eaf6e7986afbab78ba924426e2c4fac591d75 (image=quay.io/ceph/ceph:v20, name=pensive_goodall, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:19 np0005604943 podman[91316]: 2026-02-02 11:32:19.231309213 +0000 UTC m=+0.156764794 container attach 2a7ca65c7131cce4b5d863bd813eaf6e7986afbab78ba924426e2c4fac591d75 (image=quay.io/ceph/ceph:v20, name=pensive_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  2 06:32:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v50: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Feb  2 06:32:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1936024276' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb  2 06:32:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Feb  2 06:32:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1936024276' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb  2 06:32:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Feb  2 06:32:19 np0005604943 pensive_goodall[91331]: enabled application 'rbd' on pool 'volumes'
Feb  2 06:32:19 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/1527311046' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb  2 06:32:19 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/1936024276' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb  2 06:32:19 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Feb  2 06:32:19 np0005604943 systemd[1]: libpod-2a7ca65c7131cce4b5d863bd813eaf6e7986afbab78ba924426e2c4fac591d75.scope: Deactivated successfully.
Feb  2 06:32:19 np0005604943 podman[91316]: 2026-02-02 11:32:19.763288987 +0000 UTC m=+0.688744538 container died 2a7ca65c7131cce4b5d863bd813eaf6e7986afbab78ba924426e2c4fac591d75 (image=quay.io/ceph/ceph:v20, name=pensive_goodall, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 06:32:19 np0005604943 systemd[1]: var-lib-containers-storage-overlay-993dcc6e32df6a890adaafb20e52614604e4cd3ff81db3e284fa85ec4fea4692-merged.mount: Deactivated successfully.
Feb  2 06:32:19 np0005604943 podman[91316]: 2026-02-02 11:32:19.7999718 +0000 UTC m=+0.725427351 container remove 2a7ca65c7131cce4b5d863bd813eaf6e7986afbab78ba924426e2c4fac591d75 (image=quay.io/ceph/ceph:v20, name=pensive_goodall, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 06:32:19 np0005604943 systemd[1]: libpod-conmon-2a7ca65c7131cce4b5d863bd813eaf6e7986afbab78ba924426e2c4fac591d75.scope: Deactivated successfully.
Feb  2 06:32:20 np0005604943 python3[91392]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:20 np0005604943 podman[91393]: 2026-02-02 11:32:20.142183395 +0000 UTC m=+0.046252151 container create 23ebc5d80576626e0c4d2f272fbfeace8ec177a887f068a16e8e1de8fc83c31e (image=quay.io/ceph/ceph:v20, name=infallible_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 06:32:20 np0005604943 systemd[1]: Started libpod-conmon-23ebc5d80576626e0c4d2f272fbfeace8ec177a887f068a16e8e1de8fc83c31e.scope.
Feb  2 06:32:20 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:20 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4859c46d6db9b4a1476a5002b2bcbde6367e83af294a559c0ae6c3bf77282c13/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:20 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4859c46d6db9b4a1476a5002b2bcbde6367e83af294a559c0ae6c3bf77282c13/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:20 np0005604943 podman[91393]: 2026-02-02 11:32:20.122824605 +0000 UTC m=+0.026893411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:20 np0005604943 podman[91393]: 2026-02-02 11:32:20.237366143 +0000 UTC m=+0.141434909 container init 23ebc5d80576626e0c4d2f272fbfeace8ec177a887f068a16e8e1de8fc83c31e (image=quay.io/ceph/ceph:v20, name=infallible_matsumoto, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:20 np0005604943 podman[91393]: 2026-02-02 11:32:20.245253772 +0000 UTC m=+0.149322528 container start 23ebc5d80576626e0c4d2f272fbfeace8ec177a887f068a16e8e1de8fc83c31e (image=quay.io/ceph/ceph:v20, name=infallible_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 06:32:20 np0005604943 podman[91393]: 2026-02-02 11:32:20.248635079 +0000 UTC m=+0.152703855 container attach 23ebc5d80576626e0c4d2f272fbfeace8ec177a887f068a16e8e1de8fc83c31e (image=quay.io/ceph/ceph:v20, name=infallible_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Feb  2 06:32:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1423252608' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb  2 06:32:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Feb  2 06:32:20 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/1936024276' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb  2 06:32:20 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/1423252608' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb  2 06:32:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1423252608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb  2 06:32:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Feb  2 06:32:20 np0005604943 infallible_matsumoto[91409]: enabled application 'rbd' on pool 'backups'
Feb  2 06:32:20 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Feb  2 06:32:20 np0005604943 systemd[1]: libpod-23ebc5d80576626e0c4d2f272fbfeace8ec177a887f068a16e8e1de8fc83c31e.scope: Deactivated successfully.
Feb  2 06:32:20 np0005604943 podman[91393]: 2026-02-02 11:32:20.780173591 +0000 UTC m=+0.684242377 container died 23ebc5d80576626e0c4d2f272fbfeace8ec177a887f068a16e8e1de8fc83c31e (image=quay.io/ceph/ceph:v20, name=infallible_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:20 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4859c46d6db9b4a1476a5002b2bcbde6367e83af294a559c0ae6c3bf77282c13-merged.mount: Deactivated successfully.
Feb  2 06:32:20 np0005604943 podman[91393]: 2026-02-02 11:32:20.820705166 +0000 UTC m=+0.724773942 container remove 23ebc5d80576626e0c4d2f272fbfeace8ec177a887f068a16e8e1de8fc83c31e (image=quay.io/ceph/ceph:v20, name=infallible_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:20 np0005604943 systemd[1]: libpod-conmon-23ebc5d80576626e0c4d2f272fbfeace8ec177a887f068a16e8e1de8fc83c31e.scope: Deactivated successfully.
Feb  2 06:32:21 np0005604943 python3[91469]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:21 np0005604943 podman[91470]: 2026-02-02 11:32:21.176916377 +0000 UTC m=+0.056998582 container create 448b482e31c27b91fcc55832f200277c9e23fec6b2d14e898195db3b3958cbeb (image=quay.io/ceph/ceph:v20, name=youthful_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 06:32:21 np0005604943 systemd[1]: Started libpod-conmon-448b482e31c27b91fcc55832f200277c9e23fec6b2d14e898195db3b3958cbeb.scope.
Feb  2 06:32:21 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabf1f6c08e1dab9c456d3d29f6c3f5b62e9e3cbab47b27ca8c4e0fa26f755aa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabf1f6c08e1dab9c456d3d29f6c3f5b62e9e3cbab47b27ca8c4e0fa26f755aa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:21 np0005604943 podman[91470]: 2026-02-02 11:32:21.151078078 +0000 UTC m=+0.031160343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:21 np0005604943 podman[91470]: 2026-02-02 11:32:21.471808332 +0000 UTC m=+0.351890497 container init 448b482e31c27b91fcc55832f200277c9e23fec6b2d14e898195db3b3958cbeb (image=quay.io/ceph/ceph:v20, name=youthful_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:21 np0005604943 podman[91470]: 2026-02-02 11:32:21.47796458 +0000 UTC m=+0.358046785 container start 448b482e31c27b91fcc55832f200277c9e23fec6b2d14e898195db3b3958cbeb (image=quay.io/ceph/ceph:v20, name=youthful_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 06:32:21 np0005604943 podman[91470]: 2026-02-02 11:32:21.483283134 +0000 UTC m=+0.363365339 container attach 448b482e31c27b91fcc55832f200277c9e23fec6b2d14e898195db3b3958cbeb (image=quay.io/ceph/ceph:v20, name=youthful_nash, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 06:32:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v53: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:21 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/1423252608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb  2 06:32:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:32:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Feb  2 06:32:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2195508874' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb  2 06:32:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Feb  2 06:32:22 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2195508874' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb  2 06:32:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2195508874' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb  2 06:32:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Feb  2 06:32:22 np0005604943 youthful_nash[91485]: enabled application 'rbd' on pool 'images'
Feb  2 06:32:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Feb  2 06:32:22 np0005604943 podman[91470]: 2026-02-02 11:32:22.796465754 +0000 UTC m=+1.676547949 container died 448b482e31c27b91fcc55832f200277c9e23fec6b2d14e898195db3b3958cbeb (image=quay.io/ceph/ceph:v20, name=youthful_nash, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 06:32:22 np0005604943 systemd[1]: libpod-448b482e31c27b91fcc55832f200277c9e23fec6b2d14e898195db3b3958cbeb.scope: Deactivated successfully.
Feb  2 06:32:22 np0005604943 systemd[1]: var-lib-containers-storage-overlay-cabf1f6c08e1dab9c456d3d29f6c3f5b62e9e3cbab47b27ca8c4e0fa26f755aa-merged.mount: Deactivated successfully.
Feb  2 06:32:22 np0005604943 podman[91470]: 2026-02-02 11:32:22.835651349 +0000 UTC m=+1.715733534 container remove 448b482e31c27b91fcc55832f200277c9e23fec6b2d14e898195db3b3958cbeb (image=quay.io/ceph/ceph:v20, name=youthful_nash, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 06:32:22 np0005604943 systemd[1]: libpod-conmon-448b482e31c27b91fcc55832f200277c9e23fec6b2d14e898195db3b3958cbeb.scope: Deactivated successfully.
Feb  2 06:32:23 np0005604943 python3[91547]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:23 np0005604943 podman[91548]: 2026-02-02 11:32:23.212091047 +0000 UTC m=+0.047398065 container create 921ef398878b5c9ae173fa0aaaf4ab060a329696bf7f43da0c98644f001264f3 (image=quay.io/ceph/ceph:v20, name=admiring_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:23 np0005604943 systemd[1]: Started libpod-conmon-921ef398878b5c9ae173fa0aaaf4ab060a329696bf7f43da0c98644f001264f3.scope.
Feb  2 06:32:23 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:23 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbbf1f5e6218c436629d31016400fc584d58dfb143a3ea02aae73985cb4ccfb3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:23 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbbf1f5e6218c436629d31016400fc584d58dfb143a3ea02aae73985cb4ccfb3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:23 np0005604943 podman[91548]: 2026-02-02 11:32:23.194162897 +0000 UTC m=+0.029469935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:23 np0005604943 podman[91548]: 2026-02-02 11:32:23.293877166 +0000 UTC m=+0.129184244 container init 921ef398878b5c9ae173fa0aaaf4ab060a329696bf7f43da0c98644f001264f3 (image=quay.io/ceph/ceph:v20, name=admiring_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:23 np0005604943 podman[91548]: 2026-02-02 11:32:23.301984881 +0000 UTC m=+0.137291909 container start 921ef398878b5c9ae173fa0aaaf4ab060a329696bf7f43da0c98644f001264f3 (image=quay.io/ceph/ceph:v20, name=admiring_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Feb  2 06:32:23 np0005604943 podman[91548]: 2026-02-02 11:32:23.305423602 +0000 UTC m=+0.140730670 container attach 921ef398878b5c9ae173fa0aaaf4ab060a329696bf7f43da0c98644f001264f3 (image=quay.io/ceph/ceph:v20, name=admiring_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Feb  2 06:32:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Feb  2 06:32:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/553571996' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb  2 06:32:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Feb  2 06:32:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/553571996' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb  2 06:32:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Feb  2 06:32:23 np0005604943 admiring_haibt[91563]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Feb  2 06:32:23 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2195508874' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb  2 06:32:23 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/553571996' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb  2 06:32:23 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Feb  2 06:32:23 np0005604943 systemd[1]: libpod-921ef398878b5c9ae173fa0aaaf4ab060a329696bf7f43da0c98644f001264f3.scope: Deactivated successfully.
Feb  2 06:32:23 np0005604943 podman[91548]: 2026-02-02 11:32:23.802765952 +0000 UTC m=+0.638072990 container died 921ef398878b5c9ae173fa0aaaf4ab060a329696bf7f43da0c98644f001264f3 (image=quay.io/ceph/ceph:v20, name=admiring_haibt, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:23 np0005604943 systemd[1]: var-lib-containers-storage-overlay-cbbf1f5e6218c436629d31016400fc584d58dfb143a3ea02aae73985cb4ccfb3-merged.mount: Deactivated successfully.
Feb  2 06:32:23 np0005604943 podman[91548]: 2026-02-02 11:32:23.833332018 +0000 UTC m=+0.668639076 container remove 921ef398878b5c9ae173fa0aaaf4ab060a329696bf7f43da0c98644f001264f3 (image=quay.io/ceph/ceph:v20, name=admiring_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:23 np0005604943 systemd[1]: libpod-conmon-921ef398878b5c9ae173fa0aaaf4ab060a329696bf7f43da0c98644f001264f3.scope: Deactivated successfully.
Feb  2 06:32:24 np0005604943 python3[91625]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:24 np0005604943 podman[91626]: 2026-02-02 11:32:24.169179789 +0000 UTC m=+0.049366642 container create cd505d5b42c9959fcece6d64ce28b5c06ac347e1cf566952d24bd95d51b12553 (image=quay.io/ceph/ceph:v20, name=epic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:24 np0005604943 systemd[1]: Started libpod-conmon-cd505d5b42c9959fcece6d64ce28b5c06ac347e1cf566952d24bd95d51b12553.scope.
Feb  2 06:32:24 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b02e24f22625071bec0353e28db1ffce0a3a794b52190a07f6b26020793840a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b02e24f22625071bec0353e28db1ffce0a3a794b52190a07f6b26020793840a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:24 np0005604943 podman[91626]: 2026-02-02 11:32:24.144192855 +0000 UTC m=+0.024379808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:24 np0005604943 podman[91626]: 2026-02-02 11:32:24.240513425 +0000 UTC m=+0.120700288 container init cd505d5b42c9959fcece6d64ce28b5c06ac347e1cf566952d24bd95d51b12553 (image=quay.io/ceph/ceph:v20, name=epic_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:24 np0005604943 podman[91626]: 2026-02-02 11:32:24.247170668 +0000 UTC m=+0.127357511 container start cd505d5b42c9959fcece6d64ce28b5c06ac347e1cf566952d24bd95d51b12553 (image=quay.io/ceph/ceph:v20, name=epic_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 06:32:24 np0005604943 podman[91626]: 2026-02-02 11:32:24.250361551 +0000 UTC m=+0.130548394 container attach cd505d5b42c9959fcece6d64ce28b5c06ac347e1cf566952d24bd95d51b12553 (image=quay.io/ceph/ceph:v20, name=epic_taussig, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 06:32:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Feb  2 06:32:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1630432961' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb  2 06:32:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Feb  2 06:32:24 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/553571996' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb  2 06:32:24 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/1630432961' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb  2 06:32:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1630432961' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb  2 06:32:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Feb  2 06:32:24 np0005604943 epic_taussig[91641]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Feb  2 06:32:24 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Feb  2 06:32:24 np0005604943 systemd[1]: libpod-cd505d5b42c9959fcece6d64ce28b5c06ac347e1cf566952d24bd95d51b12553.scope: Deactivated successfully.
Feb  2 06:32:24 np0005604943 conmon[91641]: conmon cd505d5b42c9959fcece <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd505d5b42c9959fcece6d64ce28b5c06ac347e1cf566952d24bd95d51b12553.scope/container/memory.events
Feb  2 06:32:24 np0005604943 podman[91626]: 2026-02-02 11:32:24.832442897 +0000 UTC m=+0.712629770 container died cd505d5b42c9959fcece6d64ce28b5c06ac347e1cf566952d24bd95d51b12553 (image=quay.io/ceph/ceph:v20, name=epic_taussig, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 06:32:24 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4b02e24f22625071bec0353e28db1ffce0a3a794b52190a07f6b26020793840a-merged.mount: Deactivated successfully.
Feb  2 06:32:24 np0005604943 podman[91626]: 2026-02-02 11:32:24.873542318 +0000 UTC m=+0.753729161 container remove cd505d5b42c9959fcece6d64ce28b5c06ac347e1cf566952d24bd95d51b12553 (image=quay.io/ceph/ceph:v20, name=epic_taussig, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:24 np0005604943 systemd[1]: libpod-conmon-cd505d5b42c9959fcece6d64ce28b5c06ac347e1cf566952d24bd95d51b12553.scope: Deactivated successfully.
Feb  2 06:32:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:25 np0005604943 python3[91753]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:32:25 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/1630432961' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb  2 06:32:25 np0005604943 python3[91824]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770031945.4402723-36672-134693047931632/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:32:26 np0005604943 python3[91926]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:32:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:32:27 np0005604943 python3[92001]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770031946.514528-36687-189586801450937/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=227d0b1375f68ead66587899739dabb8c9816619 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:32:27 np0005604943 python3[92051]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:27 np0005604943 podman[92052]: 2026-02-02 11:32:27.530452842 +0000 UTC m=+0.055495138 container create e3514e65efc3c104f0ba5131b7adf45d9fca4535e1267e58c8e21a5179820ef7 (image=quay.io/ceph/ceph:v20, name=crazy_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:27 np0005604943 systemd[1]: Started libpod-conmon-e3514e65efc3c104f0ba5131b7adf45d9fca4535e1267e58c8e21a5179820ef7.scope.
Feb  2 06:32:27 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630e53b0deafcae96c7b64bea84b6f3b604596886b23fedb0fe50f32ec7ad8c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630e53b0deafcae96c7b64bea84b6f3b604596886b23fedb0fe50f32ec7ad8c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630e53b0deafcae96c7b64bea84b6f3b604596886b23fedb0fe50f32ec7ad8c6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:27 np0005604943 podman[92052]: 2026-02-02 11:32:27.503322656 +0000 UTC m=+0.028365002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:27 np0005604943 podman[92052]: 2026-02-02 11:32:27.603706344 +0000 UTC m=+0.128748660 container init e3514e65efc3c104f0ba5131b7adf45d9fca4535e1267e58c8e21a5179820ef7 (image=quay.io/ceph/ceph:v20, name=crazy_proskuriakova, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 06:32:27 np0005604943 podman[92052]: 2026-02-02 11:32:27.608824543 +0000 UTC m=+0.133866829 container start e3514e65efc3c104f0ba5131b7adf45d9fca4535e1267e58c8e21a5179820ef7 (image=quay.io/ceph/ceph:v20, name=crazy_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:27 np0005604943 podman[92052]: 2026-02-02 11:32:27.612556132 +0000 UTC m=+0.137598398 container attach e3514e65efc3c104f0ba5131b7adf45d9fca4535e1267e58c8e21a5179820ef7 (image=quay.io/ceph/ceph:v20, name=crazy_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb  2 06:32:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3831285541' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  2 06:32:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3831285541' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  2 06:32:28 np0005604943 crazy_proskuriakova[92067]: 
Feb  2 06:32:28 np0005604943 crazy_proskuriakova[92067]: [global]
Feb  2 06:32:28 np0005604943 crazy_proskuriakova[92067]: #011fsid = 4548a36b-7cdc-5e3e-a814-4e1571be1fae
Feb  2 06:32:28 np0005604943 crazy_proskuriakova[92067]: #011mon_host = 192.168.122.100
Feb  2 06:32:28 np0005604943 crazy_proskuriakova[92067]: #011rgw_keystone_api_version = 3
Feb  2 06:32:28 np0005604943 systemd[1]: libpod-e3514e65efc3c104f0ba5131b7adf45d9fca4535e1267e58c8e21a5179820ef7.scope: Deactivated successfully.
Feb  2 06:32:28 np0005604943 podman[92052]: 2026-02-02 11:32:28.027421252 +0000 UTC m=+0.552463518 container died e3514e65efc3c104f0ba5131b7adf45d9fca4535e1267e58c8e21a5179820ef7 (image=quay.io/ceph/ceph:v20, name=crazy_proskuriakova, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 06:32:28 np0005604943 systemd[1]: var-lib-containers-storage-overlay-630e53b0deafcae96c7b64bea84b6f3b604596886b23fedb0fe50f32ec7ad8c6-merged.mount: Deactivated successfully.
Feb  2 06:32:28 np0005604943 podman[92052]: 2026-02-02 11:32:28.058939276 +0000 UTC m=+0.583981542 container remove e3514e65efc3c104f0ba5131b7adf45d9fca4535e1267e58c8e21a5179820ef7 (image=quay.io/ceph/ceph:v20, name=crazy_proskuriakova, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:28 np0005604943 systemd[1]: libpod-conmon-e3514e65efc3c104f0ba5131b7adf45d9fca4535e1267e58c8e21a5179820ef7.scope: Deactivated successfully.
Feb  2 06:32:28 np0005604943 python3[92178]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:28 np0005604943 podman[92202]: 2026-02-02 11:32:28.407532496 +0000 UTC m=+0.042594085 container create 4a46b50c26ce4a9145b86a646209aabc65aee6f7d114f3407831adbf4d4d9e2e (image=quay.io/ceph/ceph:v20, name=elated_hellman, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:28 np0005604943 systemd[1]: Started libpod-conmon-4a46b50c26ce4a9145b86a646209aabc65aee6f7d114f3407831adbf4d4d9e2e.scope.
Feb  2 06:32:28 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:28 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efff6d532e7c8f39072f51fe53fb04ddea9aca995f27b6d8fe55f0938f1b831e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:28 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efff6d532e7c8f39072f51fe53fb04ddea9aca995f27b6d8fe55f0938f1b831e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:28 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efff6d532e7c8f39072f51fe53fb04ddea9aca995f27b6d8fe55f0938f1b831e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:28 np0005604943 podman[92202]: 2026-02-02 11:32:28.473193618 +0000 UTC m=+0.108255267 container init 4a46b50c26ce4a9145b86a646209aabc65aee6f7d114f3407831adbf4d4d9e2e (image=quay.io/ceph/ceph:v20, name=elated_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 06:32:28 np0005604943 podman[92202]: 2026-02-02 11:32:28.478179603 +0000 UTC m=+0.113241202 container start 4a46b50c26ce4a9145b86a646209aabc65aee6f7d114f3407831adbf4d4d9e2e (image=quay.io/ceph/ceph:v20, name=elated_hellman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:28 np0005604943 podman[92202]: 2026-02-02 11:32:28.386184227 +0000 UTC m=+0.021245856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:28 np0005604943 podman[92202]: 2026-02-02 11:32:28.482850838 +0000 UTC m=+0.117912437 container attach 4a46b50c26ce4a9145b86a646209aabc65aee6f7d114f3407831adbf4d4d9e2e (image=quay.io/ceph/ceph:v20, name=elated_hellman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:28 np0005604943 podman[92241]: 2026-02-02 11:32:28.552658511 +0000 UTC m=+0.074302134 container exec fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:28 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3831285541' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb  2 06:32:28 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3831285541' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  2 06:32:28 np0005604943 podman[92241]: 2026-02-02 11:32:28.666706336 +0000 UTC m=+0.188349969 container exec_died fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1392521256' entity='client.admin' 
Feb  2 06:32:29 np0005604943 elated_hellman[92231]: set ssl_option
Feb  2 06:32:29 np0005604943 systemd[1]: libpod-4a46b50c26ce4a9145b86a646209aabc65aee6f7d114f3407831adbf4d4d9e2e.scope: Deactivated successfully.
Feb  2 06:32:29 np0005604943 podman[92202]: 2026-02-02 11:32:29.057935881 +0000 UTC m=+0.692997530 container died 4a46b50c26ce4a9145b86a646209aabc65aee6f7d114f3407831adbf4d4d9e2e (image=quay.io/ceph/ceph:v20, name=elated_hellman, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 06:32:29 np0005604943 systemd[1]: var-lib-containers-storage-overlay-efff6d532e7c8f39072f51fe53fb04ddea9aca995f27b6d8fe55f0938f1b831e-merged.mount: Deactivated successfully.
Feb  2 06:32:29 np0005604943 podman[92202]: 2026-02-02 11:32:29.10238616 +0000 UTC m=+0.737447729 container remove 4a46b50c26ce4a9145b86a646209aabc65aee6f7d114f3407831adbf4d4d9e2e (image=quay.io/ceph/ceph:v20, name=elated_hellman, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 06:32:29 np0005604943 systemd[1]: libpod-conmon-4a46b50c26ce4a9145b86a646209aabc65aee6f7d114f3407831adbf4d4d9e2e.scope: Deactivated successfully.
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:29 np0005604943 python3[92470]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:29 np0005604943 podman[92496]: 2026-02-02 11:32:29.459033704 +0000 UTC m=+0.056323304 container create 861828154c210e78c864b39dc86e069de1a5e392b71bababa487b45cf192edc2 (image=quay.io/ceph/ceph:v20, name=suspicious_merkle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 06:32:29 np0005604943 systemd[1]: Started libpod-conmon-861828154c210e78c864b39dc86e069de1a5e392b71bababa487b45cf192edc2.scope.
Feb  2 06:32:29 np0005604943 podman[92496]: 2026-02-02 11:32:29.430774204 +0000 UTC m=+0.028063864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:29 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72cb73d34ef1ca7bcaf10765a5b89cb360fa8353461d8980a0f84b3c7b6975b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72cb73d34ef1ca7bcaf10765a5b89cb360fa8353461d8980a0f84b3c7b6975b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72cb73d34ef1ca7bcaf10765a5b89cb360fa8353461d8980a0f84b3c7b6975b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:29 np0005604943 podman[92496]: 2026-02-02 11:32:29.565703605 +0000 UTC m=+0.162993225 container init 861828154c210e78c864b39dc86e069de1a5e392b71bababa487b45cf192edc2 (image=quay.io/ceph/ceph:v20, name=suspicious_merkle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Feb  2 06:32:29 np0005604943 podman[92496]: 2026-02-02 11:32:29.57178047 +0000 UTC m=+0.169070070 container start 861828154c210e78c864b39dc86e069de1a5e392b71bababa487b45cf192edc2 (image=quay.io/ceph/ceph:v20, name=suspicious_merkle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 06:32:29 np0005604943 podman[92496]: 2026-02-02 11:32:29.580523823 +0000 UTC m=+0.177813583 container attach 861828154c210e78c864b39dc86e069de1a5e392b71bababa487b45cf192edc2 (image=quay.io/ceph/ceph:v20, name=suspicious_merkle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:32:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:32:29 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:32:30 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Feb  2 06:32:30 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb  2 06:32:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 06:32:30 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:30 np0005604943 suspicious_merkle[92511]: Scheduled rgw.rgw update...
Feb  2 06:32:30 np0005604943 systemd[1]: libpod-861828154c210e78c864b39dc86e069de1a5e392b71bababa487b45cf192edc2.scope: Deactivated successfully.
Feb  2 06:32:30 np0005604943 podman[92496]: 2026-02-02 11:32:30.025910848 +0000 UTC m=+0.623200408 container died 861828154c210e78c864b39dc86e069de1a5e392b71bababa487b45cf192edc2 (image=quay.io/ceph/ceph:v20, name=suspicious_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 06:32:30 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/1392521256' entity='client.admin' 
Feb  2 06:32:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:32:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:32:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:30 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b72cb73d34ef1ca7bcaf10765a5b89cb360fa8353461d8980a0f84b3c7b6975b-merged.mount: Deactivated successfully.
Feb  2 06:32:30 np0005604943 podman[92496]: 2026-02-02 11:32:30.110383457 +0000 UTC m=+0.707673057 container remove 861828154c210e78c864b39dc86e069de1a5e392b71bababa487b45cf192edc2 (image=quay.io/ceph/ceph:v20, name=suspicious_merkle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:30 np0005604943 systemd[1]: libpod-conmon-861828154c210e78c864b39dc86e069de1a5e392b71bababa487b45cf192edc2.scope: Deactivated successfully.
Feb  2 06:32:30 np0005604943 podman[92640]: 2026-02-02 11:32:30.235001298 +0000 UTC m=+0.042933766 container create 8551fc8fdced05884b28bf2d9c684405c396ea0937df77a7f73d12a7fa851c03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 06:32:30 np0005604943 systemd[1]: Started libpod-conmon-8551fc8fdced05884b28bf2d9c684405c396ea0937df77a7f73d12a7fa851c03.scope.
Feb  2 06:32:30 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:30 np0005604943 podman[92640]: 2026-02-02 11:32:30.308285801 +0000 UTC m=+0.116218309 container init 8551fc8fdced05884b28bf2d9c684405c396ea0937df77a7f73d12a7fa851c03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 06:32:30 np0005604943 podman[92640]: 2026-02-02 11:32:30.211137396 +0000 UTC m=+0.019069894 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:30 np0005604943 podman[92640]: 2026-02-02 11:32:30.314157881 +0000 UTC m=+0.122090339 container start 8551fc8fdced05884b28bf2d9c684405c396ea0937df77a7f73d12a7fa851c03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 06:32:30 np0005604943 condescending_kilby[92657]: 167 167
Feb  2 06:32:30 np0005604943 systemd[1]: libpod-8551fc8fdced05884b28bf2d9c684405c396ea0937df77a7f73d12a7fa851c03.scope: Deactivated successfully.
Feb  2 06:32:30 np0005604943 podman[92640]: 2026-02-02 11:32:30.325808339 +0000 UTC m=+0.133740797 container attach 8551fc8fdced05884b28bf2d9c684405c396ea0937df77a7f73d12a7fa851c03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:30 np0005604943 podman[92640]: 2026-02-02 11:32:30.326277802 +0000 UTC m=+0.134210260 container died 8551fc8fdced05884b28bf2d9c684405c396ea0937df77a7f73d12a7fa851c03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:30 np0005604943 systemd[1]: var-lib-containers-storage-overlay-5e3ce3feb16527f297d27749218fa70460deccd30ae26d6efad2665f98f85015-merged.mount: Deactivated successfully.
Feb  2 06:32:30 np0005604943 podman[92640]: 2026-02-02 11:32:30.382699987 +0000 UTC m=+0.190632445 container remove 8551fc8fdced05884b28bf2d9c684405c396ea0937df77a7f73d12a7fa851c03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:30 np0005604943 systemd[1]: libpod-conmon-8551fc8fdced05884b28bf2d9c684405c396ea0937df77a7f73d12a7fa851c03.scope: Deactivated successfully.
Feb  2 06:32:30 np0005604943 podman[92681]: 2026-02-02 11:32:30.502417626 +0000 UTC m=+0.039591699 container create db1c8d3afd1c035ab4baa8639f02e7102d8c28f7d79b30d85ad95c3a9b1a9e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poitras, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb  2 06:32:30 np0005604943 systemd[1]: Started libpod-conmon-db1c8d3afd1c035ab4baa8639f02e7102d8c28f7d79b30d85ad95c3a9b1a9e68.scope.
Feb  2 06:32:30 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51256eca46d0ba62d5c901e2362247c95e56e647bb85f0b80dea0e28baf98d3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51256eca46d0ba62d5c901e2362247c95e56e647bb85f0b80dea0e28baf98d3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51256eca46d0ba62d5c901e2362247c95e56e647bb85f0b80dea0e28baf98d3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51256eca46d0ba62d5c901e2362247c95e56e647bb85f0b80dea0e28baf98d3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:30 np0005604943 podman[92681]: 2026-02-02 11:32:30.48563721 +0000 UTC m=+0.022811303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51256eca46d0ba62d5c901e2362247c95e56e647bb85f0b80dea0e28baf98d3b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:30 np0005604943 podman[92681]: 2026-02-02 11:32:30.610772195 +0000 UTC m=+0.147946348 container init db1c8d3afd1c035ab4baa8639f02e7102d8c28f7d79b30d85ad95c3a9b1a9e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poitras, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 06:32:30 np0005604943 podman[92681]: 2026-02-02 11:32:30.618341545 +0000 UTC m=+0.155515608 container start db1c8d3afd1c035ab4baa8639f02e7102d8c28f7d79b30d85ad95c3a9b1a9e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 06:32:30 np0005604943 podman[92681]: 2026-02-02 11:32:30.626033508 +0000 UTC m=+0.163207621 container attach db1c8d3afd1c035ab4baa8639f02e7102d8c28f7d79b30d85ad95c3a9b1a9e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:30 np0005604943 python3[92779]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:32:31 np0005604943 quirky_poitras[92697]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:32:31 np0005604943 quirky_poitras[92697]: --> All data devices are unavailable
Feb  2 06:32:31 np0005604943 systemd[1]: libpod-db1c8d3afd1c035ab4baa8639f02e7102d8c28f7d79b30d85ad95c3a9b1a9e68.scope: Deactivated successfully.
Feb  2 06:32:31 np0005604943 podman[92681]: 2026-02-02 11:32:31.12308996 +0000 UTC m=+0.660264043 container died db1c8d3afd1c035ab4baa8639f02e7102d8c28f7d79b30d85ad95c3a9b1a9e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:31 np0005604943 systemd[1]: var-lib-containers-storage-overlay-51256eca46d0ba62d5c901e2362247c95e56e647bb85f0b80dea0e28baf98d3b-merged.mount: Deactivated successfully.
Feb  2 06:32:31 np0005604943 podman[92681]: 2026-02-02 11:32:31.214090027 +0000 UTC m=+0.751264120 container remove db1c8d3afd1c035ab4baa8639f02e7102d8c28f7d79b30d85ad95c3a9b1a9e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poitras, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 06:32:31 np0005604943 systemd[1]: libpod-conmon-db1c8d3afd1c035ab4baa8639f02e7102d8c28f7d79b30d85ad95c3a9b1a9e68.scope: Deactivated successfully.
Feb  2 06:32:31 np0005604943 ceph-mon[75271]: Saving service rgw.rgw spec with placement compute-0
Feb  2 06:32:31 np0005604943 python3[92869]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770031950.7687836-36728-68509363426277/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:32:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:31 np0005604943 podman[92959]: 2026-02-02 11:32:31.62051246 +0000 UTC m=+0.045311503 container create dcc35020e1468105a0a9a40fcf891d58b098dd9aba27d743c99cde140637bb7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_cohen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:31 np0005604943 systemd[1]: Started libpod-conmon-dcc35020e1468105a0a9a40fcf891d58b098dd9aba27d743c99cde140637bb7f.scope.
Feb  2 06:32:31 np0005604943 podman[92959]: 2026-02-02 11:32:31.594468826 +0000 UTC m=+0.019267879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:31 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:31 np0005604943 podman[92959]: 2026-02-02 11:32:31.721419743 +0000 UTC m=+0.146218806 container init dcc35020e1468105a0a9a40fcf891d58b098dd9aba27d743c99cde140637bb7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_cohen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:31 np0005604943 podman[92959]: 2026-02-02 11:32:31.727193511 +0000 UTC m=+0.151992544 container start dcc35020e1468105a0a9a40fcf891d58b098dd9aba27d743c99cde140637bb7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 06:32:31 np0005604943 intelligent_cohen[93001]: 167 167
Feb  2 06:32:31 np0005604943 systemd[1]: libpod-dcc35020e1468105a0a9a40fcf891d58b098dd9aba27d743c99cde140637bb7f.scope: Deactivated successfully.
Feb  2 06:32:31 np0005604943 podman[92959]: 2026-02-02 11:32:31.736427348 +0000 UTC m=+0.161226381 container attach dcc35020e1468105a0a9a40fcf891d58b098dd9aba27d743c99cde140637bb7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_cohen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 06:32:31 np0005604943 podman[92959]: 2026-02-02 11:32:31.736788358 +0000 UTC m=+0.161587391 container died dcc35020e1468105a0a9a40fcf891d58b098dd9aba27d743c99cde140637bb7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_cohen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 06:32:31 np0005604943 python3[92998]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:31 np0005604943 systemd[1]: var-lib-containers-storage-overlay-13cd052a01f1cb68a0d65a507136195f831f3a7db73d33f3c191c29c2f44521d-merged.mount: Deactivated successfully.
Feb  2 06:32:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:32:31 np0005604943 podman[92959]: 2026-02-02 11:32:31.840665098 +0000 UTC m=+0.265464161 container remove dcc35020e1468105a0a9a40fcf891d58b098dd9aba27d743c99cde140637bb7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_cohen, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 06:32:31 np0005604943 systemd[1]: libpod-conmon-dcc35020e1468105a0a9a40fcf891d58b098dd9aba27d743c99cde140637bb7f.scope: Deactivated successfully.
Feb  2 06:32:31 np0005604943 podman[93020]: 2026-02-02 11:32:31.880388049 +0000 UTC m=+0.086946880 container create 024427dd0eb1430a3df97a1534adaf9fbf6e11696f1f90c41557018dffeedf23 (image=quay.io/ceph/ceph:v20, name=lucid_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 06:32:31 np0005604943 systemd[1]: Started libpod-conmon-024427dd0eb1430a3df97a1534adaf9fbf6e11696f1f90c41557018dffeedf23.scope.
Feb  2 06:32:31 np0005604943 podman[93020]: 2026-02-02 11:32:31.852294435 +0000 UTC m=+0.058853346 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:31 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3f45df2f98993095066ac550711613c0b9fc4c01e6b62f4ae1443a5575dafc0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3f45df2f98993095066ac550711613c0b9fc4c01e6b62f4ae1443a5575dafc0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3f45df2f98993095066ac550711613c0b9fc4c01e6b62f4ae1443a5575dafc0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:31 np0005604943 podman[93043]: 2026-02-02 11:32:31.977823781 +0000 UTC m=+0.049794424 container create a502c0d1bd691dfe3222157937ba076c7968cadc2015f7ead3f4fd1603be504d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:32:31 np0005604943 podman[93020]: 2026-02-02 11:32:31.998459478 +0000 UTC m=+0.205018339 container init 024427dd0eb1430a3df97a1534adaf9fbf6e11696f1f90c41557018dffeedf23 (image=quay.io/ceph/ceph:v20, name=lucid_germain, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb  2 06:32:32 np0005604943 podman[93020]: 2026-02-02 11:32:32.008635674 +0000 UTC m=+0.215194505 container start 024427dd0eb1430a3df97a1534adaf9fbf6e11696f1f90c41557018dffeedf23 (image=quay.io/ceph/ceph:v20, name=lucid_germain, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 06:32:32 np0005604943 podman[93020]: 2026-02-02 11:32:32.041629819 +0000 UTC m=+0.248188660 container attach 024427dd0eb1430a3df97a1534adaf9fbf6e11696f1f90c41557018dffeedf23 (image=quay.io/ceph/ceph:v20, name=lucid_germain, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:32 np0005604943 systemd[1]: Started libpod-conmon-a502c0d1bd691dfe3222157937ba076c7968cadc2015f7ead3f4fd1603be504d.scope.
Feb  2 06:32:32 np0005604943 podman[93043]: 2026-02-02 11:32:31.947887013 +0000 UTC m=+0.019857676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:32 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b923d878d7f0f3e52c08fc18d191efebc11c81da11e9dbdd8e0a055ee6c01474/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b923d878d7f0f3e52c08fc18d191efebc11c81da11e9dbdd8e0a055ee6c01474/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b923d878d7f0f3e52c08fc18d191efebc11c81da11e9dbdd8e0a055ee6c01474/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b923d878d7f0f3e52c08fc18d191efebc11c81da11e9dbdd8e0a055ee6c01474/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:32 np0005604943 podman[93043]: 2026-02-02 11:32:32.103283725 +0000 UTC m=+0.175254388 container init a502c0d1bd691dfe3222157937ba076c7968cadc2015f7ead3f4fd1603be504d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hypatia, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:32 np0005604943 podman[93043]: 2026-02-02 11:32:32.116823207 +0000 UTC m=+0.188793860 container start a502c0d1bd691dfe3222157937ba076c7968cadc2015f7ead3f4fd1603be504d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hypatia, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:32:32 np0005604943 podman[93043]: 2026-02-02 11:32:32.14798243 +0000 UTC m=+0.219953073 container attach a502c0d1bd691dfe3222157937ba076c7968cadc2015f7ead3f4fd1603be504d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]: {
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:    "0": [
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:        {
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "devices": [
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "/dev/loop3"
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            ],
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_name": "ceph_lv0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_size": "21470642176",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "name": "ceph_lv0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "tags": {
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.crush_device_class": "",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.encrypted": "0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.osd_id": "0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.type": "block",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.vdo": "0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.with_tpm": "0"
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            },
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "type": "block",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "vg_name": "ceph_vg0"
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:        }
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:    ],
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:    "1": [
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:        {
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "devices": [
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "/dev/loop4"
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            ],
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_name": "ceph_lv1",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_size": "21470642176",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "name": "ceph_lv1",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "tags": {
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.crush_device_class": "",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.encrypted": "0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.osd_id": "1",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.type": "block",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.vdo": "0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.with_tpm": "0"
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            },
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "type": "block",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "vg_name": "ceph_vg1"
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:        }
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:    ],
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:    "2": [
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:        {
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "devices": [
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "/dev/loop5"
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            ],
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_name": "ceph_lv2",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_size": "21470642176",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "name": "ceph_lv2",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "tags": {
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.crush_device_class": "",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.encrypted": "0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.osd_id": "2",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.type": "block",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.vdo": "0",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:                "ceph.with_tpm": "0"
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            },
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "type": "block",
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:            "vg_name": "ceph_vg2"
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:        }
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]:    ]
Feb  2 06:32:32 np0005604943 stoic_hypatia[93063]: }
Feb  2 06:32:32 np0005604943 systemd[1]: libpod-a502c0d1bd691dfe3222157937ba076c7968cadc2015f7ead3f4fd1603be504d.scope: Deactivated successfully.
Feb  2 06:32:32 np0005604943 podman[93043]: 2026-02-02 11:32:32.435015824 +0000 UTC m=+0.506986467 container died a502c0d1bd691dfe3222157937ba076c7968cadc2015f7ead3f4fd1603be504d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:32 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:32:32 np0005604943 ceph-mgr[75558]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb  2 06:32:32 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0[75267]: 2026-02-02T11:32:32.483+0000 7f15455dc640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb  2 06:32:32 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b923d878d7f0f3e52c08fc18d191efebc11c81da11e9dbdd8e0a055ee6c01474-merged.mount: Deactivated successfully.
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).mds e2 new map
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2026-02-02T11:32:32:483187+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-02T11:32:32.482996+0000#012modified#0112026-02-02T11:32:32.482996+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Feb  2 06:32:32 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb  2 06:32:32 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 06:32:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:32 np0005604943 ceph-mgr[75558]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb  2 06:32:32 np0005604943 podman[93043]: 2026-02-02 11:32:32.561031905 +0000 UTC m=+0.633002558 container remove a502c0d1bd691dfe3222157937ba076c7968cadc2015f7ead3f4fd1603be504d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hypatia, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 06:32:32 np0005604943 systemd[1]: libpod-conmon-a502c0d1bd691dfe3222157937ba076c7968cadc2015f7ead3f4fd1603be504d.scope: Deactivated successfully.
Feb  2 06:32:32 np0005604943 systemd[1]: libpod-024427dd0eb1430a3df97a1534adaf9fbf6e11696f1f90c41557018dffeedf23.scope: Deactivated successfully.
Feb  2 06:32:32 np0005604943 podman[93020]: 2026-02-02 11:32:32.587194673 +0000 UTC m=+0.793753514 container died 024427dd0eb1430a3df97a1534adaf9fbf6e11696f1f90c41557018dffeedf23 (image=quay.io/ceph/ceph:v20, name=lucid_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:32 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f3f45df2f98993095066ac550711613c0b9fc4c01e6b62f4ae1443a5575dafc0-merged.mount: Deactivated successfully.
Feb  2 06:32:32 np0005604943 podman[93020]: 2026-02-02 11:32:32.685393787 +0000 UTC m=+0.891952628 container remove 024427dd0eb1430a3df97a1534adaf9fbf6e11696f1f90c41557018dffeedf23 (image=quay.io/ceph/ceph:v20, name=lucid_germain, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:32 np0005604943 systemd[1]: libpod-conmon-024427dd0eb1430a3df97a1534adaf9fbf6e11696f1f90c41557018dffeedf23.scope: Deactivated successfully.
Feb  2 06:32:32 np0005604943 python3[93196]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:32 np0005604943 podman[93209]: 2026-02-02 11:32:32.993648156 +0000 UTC m=+0.049758263 container create 62d06049236efb5835740ddd966528e73233a726a59484574553b5571b5deeaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sutherland, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 06:32:33 np0005604943 systemd[1]: Started libpod-conmon-62d06049236efb5835740ddd966528e73233a726a59484574553b5571b5deeaa.scope.
Feb  2 06:32:33 np0005604943 podman[93221]: 2026-02-02 11:32:33.045807207 +0000 UTC m=+0.066328592 container create 42145769fe3f9d061210665ec1759e51455bc6936c957b8a06de178a9ee6de90 (image=quay.io/ceph/ceph:v20, name=dreamy_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 06:32:33 np0005604943 podman[93209]: 2026-02-02 11:32:32.967048786 +0000 UTC m=+0.023158893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:33 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:33 np0005604943 systemd[1]: Started libpod-conmon-42145769fe3f9d061210665ec1759e51455bc6936c957b8a06de178a9ee6de90.scope.
Feb  2 06:32:33 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13acedb760d1cbcc79fac939921654f39391faf606b65d54fb147f943b21dc59/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13acedb760d1cbcc79fac939921654f39391faf606b65d54fb147f943b21dc59/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13acedb760d1cbcc79fac939921654f39391faf606b65d54fb147f943b21dc59/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:33 np0005604943 podman[93209]: 2026-02-02 11:32:33.086624829 +0000 UTC m=+0.142734906 container init 62d06049236efb5835740ddd966528e73233a726a59484574553b5571b5deeaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:33 np0005604943 podman[93209]: 2026-02-02 11:32:33.102568922 +0000 UTC m=+0.158678989 container start 62d06049236efb5835740ddd966528e73233a726a59484574553b5571b5deeaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:33 np0005604943 thirsty_sutherland[93238]: 167 167
Feb  2 06:32:33 np0005604943 systemd[1]: libpod-62d06049236efb5835740ddd966528e73233a726a59484574553b5571b5deeaa.scope: Deactivated successfully.
Feb  2 06:32:33 np0005604943 podman[93209]: 2026-02-02 11:32:33.111939933 +0000 UTC m=+0.168050010 container attach 62d06049236efb5835740ddd966528e73233a726a59484574553b5571b5deeaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:33 np0005604943 podman[93209]: 2026-02-02 11:32:33.112362345 +0000 UTC m=+0.168472422 container died 62d06049236efb5835740ddd966528e73233a726a59484574553b5571b5deeaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 06:32:33 np0005604943 podman[93221]: 2026-02-02 11:32:33.020479153 +0000 UTC m=+0.041000558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:33 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ea39c4a576d778a4fa67c77cce6a1b18ce7a1053fddd32c4ac976c89d072bfdc-merged.mount: Deactivated successfully.
Feb  2 06:32:33 np0005604943 podman[93209]: 2026-02-02 11:32:33.19226254 +0000 UTC m=+0.248372617 container remove 62d06049236efb5835740ddd966528e73233a726a59484574553b5571b5deeaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:33 np0005604943 systemd[1]: libpod-conmon-62d06049236efb5835740ddd966528e73233a726a59484574553b5571b5deeaa.scope: Deactivated successfully.
Feb  2 06:32:33 np0005604943 podman[93221]: 2026-02-02 11:32:33.218889171 +0000 UTC m=+0.239410576 container init 42145769fe3f9d061210665ec1759e51455bc6936c957b8a06de178a9ee6de90 (image=quay.io/ceph/ceph:v20, name=dreamy_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:33 np0005604943 podman[93221]: 2026-02-02 11:32:33.223422132 +0000 UTC m=+0.243943517 container start 42145769fe3f9d061210665ec1759e51455bc6936c957b8a06de178a9ee6de90 (image=quay.io/ceph/ceph:v20, name=dreamy_cannon, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:33 np0005604943 podman[93221]: 2026-02-02 11:32:33.236867272 +0000 UTC m=+0.257388897 container attach 42145769fe3f9d061210665ec1759e51455bc6936c957b8a06de178a9ee6de90 (image=quay.io/ceph/ceph:v20, name=dreamy_cannon, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 06:32:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb  2 06:32:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb  2 06:32:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb  2 06:32:33 np0005604943 ceph-mon[75271]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb  2 06:32:33 np0005604943 ceph-mon[75271]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb  2 06:32:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb  2 06:32:33 np0005604943 ceph-mon[75271]: Saving service mds.cephfs spec with placement compute-0
Feb  2 06:32:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:33 np0005604943 podman[93268]: 2026-02-02 11:32:33.3455547 +0000 UTC m=+0.056772475 container create 16d3f6ddf23cb9d488815f5c0605fe2262bfd9fd695eb8b99f79abfdda73d479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jang, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:33 np0005604943 systemd[1]: Started libpod-conmon-16d3f6ddf23cb9d488815f5c0605fe2262bfd9fd695eb8b99f79abfdda73d479.scope.
Feb  2 06:32:33 np0005604943 podman[93268]: 2026-02-02 11:32:33.312705578 +0000 UTC m=+0.023923433 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:33 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b71e1ccea2cfbc92ef69f5fbcd67a9c1fd2018bfe989378338d3e6376f31d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b71e1ccea2cfbc92ef69f5fbcd67a9c1fd2018bfe989378338d3e6376f31d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b71e1ccea2cfbc92ef69f5fbcd67a9c1fd2018bfe989378338d3e6376f31d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b71e1ccea2cfbc92ef69f5fbcd67a9c1fd2018bfe989378338d3e6376f31d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:33 np0005604943 podman[93268]: 2026-02-02 11:32:33.451841728 +0000 UTC m=+0.163059493 container init 16d3f6ddf23cb9d488815f5c0605fe2262bfd9fd695eb8b99f79abfdda73d479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:32:33 np0005604943 podman[93268]: 2026-02-02 11:32:33.458333577 +0000 UTC m=+0.169551332 container start 16d3f6ddf23cb9d488815f5c0605fe2262bfd9fd695eb8b99f79abfdda73d479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jang, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:33 np0005604943 podman[93268]: 2026-02-02 11:32:33.466147943 +0000 UTC m=+0.177365698 container attach 16d3f6ddf23cb9d488815f5c0605fe2262bfd9fd695eb8b99f79abfdda73d479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 06:32:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:33 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 06:32:33 np0005604943 ceph-mgr[75558]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb  2 06:32:33 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb  2 06:32:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 06:32:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:33 np0005604943 dreamy_cannon[93243]: Scheduled mds.cephfs update...
Feb  2 06:32:33 np0005604943 systemd[1]: libpod-42145769fe3f9d061210665ec1759e51455bc6936c957b8a06de178a9ee6de90.scope: Deactivated successfully.
Feb  2 06:32:33 np0005604943 podman[93221]: 2026-02-02 11:32:33.640348099 +0000 UTC m=+0.660869524 container died 42145769fe3f9d061210665ec1759e51455bc6936c957b8a06de178a9ee6de90 (image=quay.io/ceph/ceph:v20, name=dreamy_cannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 06:32:33 np0005604943 systemd[1]: var-lib-containers-storage-overlay-13acedb760d1cbcc79fac939921654f39391faf606b65d54fb147f943b21dc59-merged.mount: Deactivated successfully.
Feb  2 06:32:33 np0005604943 podman[93221]: 2026-02-02 11:32:33.713788586 +0000 UTC m=+0.734309971 container remove 42145769fe3f9d061210665ec1759e51455bc6936c957b8a06de178a9ee6de90 (image=quay.io/ceph/ceph:v20, name=dreamy_cannon, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 06:32:33 np0005604943 systemd[1]: libpod-conmon-42145769fe3f9d061210665ec1759e51455bc6936c957b8a06de178a9ee6de90.scope: Deactivated successfully.
Feb  2 06:32:34 np0005604943 lvm[93395]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:32:34 np0005604943 lvm[93395]: VG ceph_vg1 finished
Feb  2 06:32:34 np0005604943 lvm[93393]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:32:34 np0005604943 lvm[93393]: VG ceph_vg0 finished
Feb  2 06:32:34 np0005604943 lvm[93397]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:32:34 np0005604943 lvm[93397]: VG ceph_vg2 finished
Feb  2 06:32:34 np0005604943 compassionate_jang[93303]: {}
Feb  2 06:32:34 np0005604943 systemd[1]: libpod-16d3f6ddf23cb9d488815f5c0605fe2262bfd9fd695eb8b99f79abfdda73d479.scope: Deactivated successfully.
Feb  2 06:32:34 np0005604943 podman[93268]: 2026-02-02 11:32:34.203939214 +0000 UTC m=+0.915157039 container died 16d3f6ddf23cb9d488815f5c0605fe2262bfd9fd695eb8b99f79abfdda73d479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:34 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a9b71e1ccea2cfbc92ef69f5fbcd67a9c1fd2018bfe989378338d3e6376f31d5-merged.mount: Deactivated successfully.
Feb  2 06:32:34 np0005604943 podman[93268]: 2026-02-02 11:32:34.292525091 +0000 UTC m=+1.003742846 container remove 16d3f6ddf23cb9d488815f5c0605fe2262bfd9fd695eb8b99f79abfdda73d479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jang, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 06:32:34 np0005604943 systemd[1]: libpod-conmon-16d3f6ddf23cb9d488815f5c0605fe2262bfd9fd695eb8b99f79abfdda73d479.scope: Deactivated successfully.
Feb  2 06:32:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:34 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:34 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:34 np0005604943 ceph-mon[75271]: Saving service mds.cephfs spec with placement compute-0
Feb  2 06:32:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:34 np0005604943 python3[93566]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 06:32:34 np0005604943 podman[93661]: 2026-02-02 11:32:34.95772932 +0000 UTC m=+0.061273526 container exec fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:35 np0005604943 python3[93696]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770031954.4817305-36776-185288086078480/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=9ed8f834e291931712ada0e12cd8297435f539d4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:32:35 np0005604943 podman[93661]: 2026-02-02 11:32:35.073603027 +0000 UTC m=+0.177147213 container exec_died fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:35 np0005604943 python3[93844]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:35 np0005604943 podman[93875]: 2026-02-02 11:32:35.58571161 +0000 UTC m=+0.025777937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:35 np0005604943 podman[93875]: 2026-02-02 11:32:35.683700049 +0000 UTC m=+0.123766376 container create 589bd4eadcaabdf6e2b251f62d6738cb809ef1b5657da71743c4a2fea8478b3f (image=quay.io/ceph/ceph:v20, name=vigorous_pasteur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:35 np0005604943 systemd[1]: Started libpod-conmon-589bd4eadcaabdf6e2b251f62d6738cb809ef1b5657da71743c4a2fea8478b3f.scope.
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:32:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:32:35 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5306f960b65e4f1b44fb329b32e8a28395ad3c09c3d68e7dc22fc033b235fce7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5306f960b65e4f1b44fb329b32e8a28395ad3c09c3d68e7dc22fc033b235fce7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:35 np0005604943 podman[93875]: 2026-02-02 11:32:35.797692251 +0000 UTC m=+0.237758588 container init 589bd4eadcaabdf6e2b251f62d6738cb809ef1b5657da71743c4a2fea8478b3f (image=quay.io/ceph/ceph:v20, name=vigorous_pasteur, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb  2 06:32:35 np0005604943 podman[93875]: 2026-02-02 11:32:35.806653781 +0000 UTC m=+0.246720088 container start 589bd4eadcaabdf6e2b251f62d6738cb809ef1b5657da71743c4a2fea8478b3f (image=quay.io/ceph/ceph:v20, name=vigorous_pasteur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Feb  2 06:32:35 np0005604943 podman[93875]: 2026-02-02 11:32:35.8166539 +0000 UTC m=+0.256720207 container attach 589bd4eadcaabdf6e2b251f62d6738cb809ef1b5657da71743c4a2fea8478b3f (image=quay.io/ceph/ceph:v20, name=vigorous_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:36 np0005604943 podman[93978]: 2026-02-02 11:32:36.111566293 +0000 UTC m=+0.038109195 container create 8ec6ed8d33ef6a82c9a475e58703a50a3937130d7ee158a96a47adfab7ed0f69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 06:32:36 np0005604943 systemd[1]: Started libpod-conmon-8ec6ed8d33ef6a82c9a475e58703a50a3937130d7ee158a96a47adfab7ed0f69.scope.
Feb  2 06:32:36 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:36 np0005604943 podman[93978]: 2026-02-02 11:32:36.165060553 +0000 UTC m=+0.091603495 container init 8ec6ed8d33ef6a82c9a475e58703a50a3937130d7ee158a96a47adfab7ed0f69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_payne, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:36 np0005604943 podman[93978]: 2026-02-02 11:32:36.170168811 +0000 UTC m=+0.096711723 container start 8ec6ed8d33ef6a82c9a475e58703a50a3937130d7ee158a96a47adfab7ed0f69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 06:32:36 np0005604943 goofy_payne[93994]: 167 167
Feb  2 06:32:36 np0005604943 systemd[1]: libpod-8ec6ed8d33ef6a82c9a475e58703a50a3937130d7ee158a96a47adfab7ed0f69.scope: Deactivated successfully.
Feb  2 06:32:36 np0005604943 conmon[93994]: conmon 8ec6ed8d33ef6a82c9a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ec6ed8d33ef6a82c9a475e58703a50a3937130d7ee158a96a47adfab7ed0f69.scope/container/memory.events
Feb  2 06:32:36 np0005604943 podman[93978]: 2026-02-02 11:32:36.174227879 +0000 UTC m=+0.100770821 container attach 8ec6ed8d33ef6a82c9a475e58703a50a3937130d7ee158a96a47adfab7ed0f69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_payne, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:36 np0005604943 podman[93978]: 2026-02-02 11:32:36.174775544 +0000 UTC m=+0.101318456 container died 8ec6ed8d33ef6a82c9a475e58703a50a3937130d7ee158a96a47adfab7ed0f69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_payne, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 06:32:36 np0005604943 podman[93978]: 2026-02-02 11:32:36.091928344 +0000 UTC m=+0.018471266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:36 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f6c495bd4d229e6b54fb4b98abab2143e530e59c19e9480d89e97847ab119ce2-merged.mount: Deactivated successfully.
Feb  2 06:32:36 np0005604943 podman[93978]: 2026-02-02 11:32:36.235791572 +0000 UTC m=+0.162334514 container remove 8ec6ed8d33ef6a82c9a475e58703a50a3937130d7ee158a96a47adfab7ed0f69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_payne, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 06:32:36 np0005604943 systemd[1]: libpod-conmon-8ec6ed8d33ef6a82c9a475e58703a50a3937130d7ee158a96a47adfab7ed0f69.scope: Deactivated successfully.
Feb  2 06:32:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Feb  2 06:32:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2653940244' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb  2 06:32:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2653940244' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb  2 06:32:36 np0005604943 systemd[1]: libpod-589bd4eadcaabdf6e2b251f62d6738cb809ef1b5657da71743c4a2fea8478b3f.scope: Deactivated successfully.
Feb  2 06:32:36 np0005604943 podman[93875]: 2026-02-02 11:32:36.342925735 +0000 UTC m=+0.782992112 container died 589bd4eadcaabdf6e2b251f62d6738cb809ef1b5657da71743c4a2fea8478b3f (image=quay.io/ceph/ceph:v20, name=vigorous_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 06:32:36 np0005604943 systemd[1]: var-lib-containers-storage-overlay-5306f960b65e4f1b44fb329b32e8a28395ad3c09c3d68e7dc22fc033b235fce7-merged.mount: Deactivated successfully.
Feb  2 06:32:36 np0005604943 podman[93875]: 2026-02-02 11:32:36.426605639 +0000 UTC m=+0.866671946 container remove 589bd4eadcaabdf6e2b251f62d6738cb809ef1b5657da71743c4a2fea8478b3f (image=quay.io/ceph/ceph:v20, name=vigorous_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:36 np0005604943 systemd[1]: libpod-conmon-589bd4eadcaabdf6e2b251f62d6738cb809ef1b5657da71743c4a2fea8478b3f.scope: Deactivated successfully.
Feb  2 06:32:36 np0005604943 podman[94018]: 2026-02-02 11:32:36.473172288 +0000 UTC m=+0.151137380 container create 3e0caefc5404fa03e298413cf2cf17208bba52892864c344edcceb2a4ea5be12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_murdock, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:36 np0005604943 systemd[1]: Started libpod-conmon-3e0caefc5404fa03e298413cf2cf17208bba52892864c344edcceb2a4ea5be12.scope.
Feb  2 06:32:36 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3da4f2a5e6b9615134480a39415471aef4533d0a88d86d48f30f8d3f63c4f7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3da4f2a5e6b9615134480a39415471aef4533d0a88d86d48f30f8d3f63c4f7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3da4f2a5e6b9615134480a39415471aef4533d0a88d86d48f30f8d3f63c4f7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3da4f2a5e6b9615134480a39415471aef4533d0a88d86d48f30f8d3f63c4f7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3da4f2a5e6b9615134480a39415471aef4533d0a88d86d48f30f8d3f63c4f7d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:36 np0005604943 podman[94018]: 2026-02-02 11:32:36.455419243 +0000 UTC m=+0.133384365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:36 np0005604943 podman[94018]: 2026-02-02 11:32:36.568601412 +0000 UTC m=+0.246566584 container init 3e0caefc5404fa03e298413cf2cf17208bba52892864c344edcceb2a4ea5be12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 06:32:36 np0005604943 podman[94018]: 2026-02-02 11:32:36.575271425 +0000 UTC m=+0.253236547 container start 3e0caefc5404fa03e298413cf2cf17208bba52892864c344edcceb2a4ea5be12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_murdock, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 06:32:36 np0005604943 podman[94018]: 2026-02-02 11:32:36.581450794 +0000 UTC m=+0.259415926 container attach 3e0caefc5404fa03e298413cf2cf17208bba52892864c344edcceb2a4ea5be12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 06:32:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:32:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:32:36 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2653940244' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb  2 06:32:36 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2653940244' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb  2 06:32:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:32:36 np0005604943 gallant_murdock[94050]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:32:36 np0005604943 gallant_murdock[94050]: --> All data devices are unavailable
Feb  2 06:32:37 np0005604943 systemd[1]: libpod-3e0caefc5404fa03e298413cf2cf17208bba52892864c344edcceb2a4ea5be12.scope: Deactivated successfully.
Feb  2 06:32:37 np0005604943 podman[94018]: 2026-02-02 11:32:37.01113191 +0000 UTC m=+0.689097012 container died 3e0caefc5404fa03e298413cf2cf17208bba52892864c344edcceb2a4ea5be12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 06:32:37 np0005604943 systemd[1]: var-lib-containers-storage-overlay-e3da4f2a5e6b9615134480a39415471aef4533d0a88d86d48f30f8d3f63c4f7d-merged.mount: Deactivated successfully.
Feb  2 06:32:37 np0005604943 podman[94018]: 2026-02-02 11:32:37.066249278 +0000 UTC m=+0.744214370 container remove 3e0caefc5404fa03e298413cf2cf17208bba52892864c344edcceb2a4ea5be12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:37 np0005604943 systemd[1]: libpod-conmon-3e0caefc5404fa03e298413cf2cf17208bba52892864c344edcceb2a4ea5be12.scope: Deactivated successfully.
Feb  2 06:32:37 np0005604943 python3[94130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:37 np0005604943 podman[94159]: 2026-02-02 11:32:37.335072795 +0000 UTC m=+0.042533034 container create be2b7cc2dcc682e1c1034fcc3b929ca86eedd5c22f5e82070d5af9a5066f568e (image=quay.io/ceph/ceph:v20, name=practical_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:32:37 np0005604943 systemd[1]: Started libpod-conmon-be2b7cc2dcc682e1c1034fcc3b929ca86eedd5c22f5e82070d5af9a5066f568e.scope.
Feb  2 06:32:37 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:37 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dc0f156021151fcca48dd3b0051983d978568de2cd12e1fb5857b61ca297ba3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:37 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dc0f156021151fcca48dd3b0051983d978568de2cd12e1fb5857b61ca297ba3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:37 np0005604943 podman[94159]: 2026-02-02 11:32:37.405066252 +0000 UTC m=+0.112526501 container init be2b7cc2dcc682e1c1034fcc3b929ca86eedd5c22f5e82070d5af9a5066f568e (image=quay.io/ceph/ceph:v20, name=practical_kowalevski, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:37 np0005604943 podman[94159]: 2026-02-02 11:32:37.410985493 +0000 UTC m=+0.118445772 container start be2b7cc2dcc682e1c1034fcc3b929ca86eedd5c22f5e82070d5af9a5066f568e (image=quay.io/ceph/ceph:v20, name=practical_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:37 np0005604943 podman[94159]: 2026-02-02 11:32:37.316046223 +0000 UTC m=+0.023506512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:37 np0005604943 podman[94159]: 2026-02-02 11:32:37.419720026 +0000 UTC m=+0.127180305 container attach be2b7cc2dcc682e1c1034fcc3b929ca86eedd5c22f5e82070d5af9a5066f568e (image=quay.io/ceph/ceph:v20, name=practical_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 06:32:37 np0005604943 podman[94192]: 2026-02-02 11:32:37.459297532 +0000 UTC m=+0.037404723 container create 0128e2430a5a30865cc713555ee846d8325b075c0cf5e373fc475749c6cfe941 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:37 np0005604943 systemd[1]: Started libpod-conmon-0128e2430a5a30865cc713555ee846d8325b075c0cf5e373fc475749c6cfe941.scope.
Feb  2 06:32:37 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:37 np0005604943 podman[94192]: 2026-02-02 11:32:37.517784317 +0000 UTC m=+0.095891508 container init 0128e2430a5a30865cc713555ee846d8325b075c0cf5e373fc475749c6cfe941 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 06:32:37 np0005604943 podman[94192]: 2026-02-02 11:32:37.524403419 +0000 UTC m=+0.102510610 container start 0128e2430a5a30865cc713555ee846d8325b075c0cf5e373fc475749c6cfe941 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cerf, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 06:32:37 np0005604943 objective_cerf[94208]: 167 167
Feb  2 06:32:37 np0005604943 systemd[1]: libpod-0128e2430a5a30865cc713555ee846d8325b075c0cf5e373fc475749c6cfe941.scope: Deactivated successfully.
Feb  2 06:32:37 np0005604943 podman[94192]: 2026-02-02 11:32:37.442502706 +0000 UTC m=+0.020609927 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:37 np0005604943 podman[94192]: 2026-02-02 11:32:37.53791013 +0000 UTC m=+0.116017411 container attach 0128e2430a5a30865cc713555ee846d8325b075c0cf5e373fc475749c6cfe941 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cerf, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:37 np0005604943 podman[94192]: 2026-02-02 11:32:37.538171278 +0000 UTC m=+0.116278469 container died 0128e2430a5a30865cc713555ee846d8325b075c0cf5e373fc475749c6cfe941 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cerf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 06:32:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:37 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b2c9fd9de6887f4be32ccdee291ed73d945deee8c89571a7414ee018e1d7683d-merged.mount: Deactivated successfully.
Feb  2 06:32:37 np0005604943 podman[94192]: 2026-02-02 11:32:37.604112018 +0000 UTC m=+0.182219249 container remove 0128e2430a5a30865cc713555ee846d8325b075c0cf5e373fc475749c6cfe941 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cerf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 06:32:37 np0005604943 systemd[1]: libpod-conmon-0128e2430a5a30865cc713555ee846d8325b075c0cf5e373fc475749c6cfe941.scope: Deactivated successfully.
Feb  2 06:32:37 np0005604943 podman[94251]: 2026-02-02 11:32:37.733743842 +0000 UTC m=+0.044479328 container create 9b23e415457bea9bc7f44c56abe24d7b059e943cfcdcaae776f4c2d90082d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_nightingale, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:37 np0005604943 systemd[1]: Started libpod-conmon-9b23e415457bea9bc7f44c56abe24d7b059e943cfcdcaae776f4c2d90082d3a6.scope.
Feb  2 06:32:37 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:37 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3933bf52899402030e49867fef7e504196644878bdf6b9a48c37c4371c27197c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:37 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3933bf52899402030e49867fef7e504196644878bdf6b9a48c37c4371c27197c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:37 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3933bf52899402030e49867fef7e504196644878bdf6b9a48c37c4371c27197c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:37 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3933bf52899402030e49867fef7e504196644878bdf6b9a48c37c4371c27197c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:37 np0005604943 podman[94251]: 2026-02-02 11:32:37.713363272 +0000 UTC m=+0.024098788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:37 np0005604943 podman[94251]: 2026-02-02 11:32:37.82686805 +0000 UTC m=+0.137603586 container init 9b23e415457bea9bc7f44c56abe24d7b059e943cfcdcaae776f4c2d90082d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_nightingale, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:37 np0005604943 podman[94251]: 2026-02-02 11:32:37.835574872 +0000 UTC m=+0.146310358 container start 9b23e415457bea9bc7f44c56abe24d7b059e943cfcdcaae776f4c2d90082d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:37 np0005604943 podman[94251]: 2026-02-02 11:32:37.842545134 +0000 UTC m=+0.153280650 container attach 9b23e415457bea9bc7f44c56abe24d7b059e943cfcdcaae776f4c2d90082d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 06:32:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  2 06:32:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146704273' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb  2 06:32:37 np0005604943 practical_kowalevski[94176]: 
Feb  2 06:32:37 np0005604943 practical_kowalevski[94176]: {"fsid":"4548a36b-7cdc-5e3e-a814-4e1571be1fae","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":106,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":30,"num_osds":3,"num_up_osds":3,"osd_up_since":1770031926,"num_in_osds":3,"osd_in_since":1770031906,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83832832,"bytes_avail":64328093696,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-02-02T11:32:32:483187+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-02T11:32:11.558080+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Feb  2 06:32:37 np0005604943 systemd[1]: libpod-be2b7cc2dcc682e1c1034fcc3b929ca86eedd5c22f5e82070d5af9a5066f568e.scope: Deactivated successfully.
Feb  2 06:32:37 np0005604943 podman[94159]: 2026-02-02 11:32:37.970587784 +0000 UTC m=+0.678048003 container died be2b7cc2dcc682e1c1034fcc3b929ca86eedd5c22f5e82070d5af9a5066f568e (image=quay.io/ceph/ceph:v20, name=practical_kowalevski, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 06:32:38 np0005604943 podman[94159]: 2026-02-02 11:32:38.019514541 +0000 UTC m=+0.726974770 container remove be2b7cc2dcc682e1c1034fcc3b929ca86eedd5c22f5e82070d5af9a5066f568e (image=quay.io/ceph/ceph:v20, name=practical_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:32:38 np0005604943 systemd[1]: libpod-conmon-be2b7cc2dcc682e1c1034fcc3b929ca86eedd5c22f5e82070d5af9a5066f568e.scope: Deactivated successfully.
Feb  2 06:32:38 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1dc0f156021151fcca48dd3b0051983d978568de2cd12e1fb5857b61ca297ba3-merged.mount: Deactivated successfully.
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]: {
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:    "0": [
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:        {
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "devices": [
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "/dev/loop3"
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            ],
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_name": "ceph_lv0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_size": "21470642176",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "name": "ceph_lv0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "tags": {
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.crush_device_class": "",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.encrypted": "0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.osd_id": "0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.type": "block",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.vdo": "0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.with_tpm": "0"
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            },
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "type": "block",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "vg_name": "ceph_vg0"
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:        }
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:    ],
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:    "1": [
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:        {
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "devices": [
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "/dev/loop4"
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            ],
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_name": "ceph_lv1",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_size": "21470642176",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "name": "ceph_lv1",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "tags": {
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.crush_device_class": "",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.encrypted": "0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.osd_id": "1",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.type": "block",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.vdo": "0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.with_tpm": "0"
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            },
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "type": "block",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "vg_name": "ceph_vg1"
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:        }
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:    ],
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:    "2": [
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:        {
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "devices": [
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "/dev/loop5"
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            ],
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_name": "ceph_lv2",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_size": "21470642176",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "name": "ceph_lv2",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "tags": {
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.crush_device_class": "",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.encrypted": "0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.osd_id": "2",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.type": "block",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.vdo": "0",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:                "ceph.with_tpm": "0"
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            },
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "type": "block",
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:            "vg_name": "ceph_vg2"
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:        }
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]:    ]
Feb  2 06:32:38 np0005604943 beautiful_nightingale[94268]: }
Feb  2 06:32:38 np0005604943 systemd[1]: libpod-9b23e415457bea9bc7f44c56abe24d7b059e943cfcdcaae776f4c2d90082d3a6.scope: Deactivated successfully.
Feb  2 06:32:38 np0005604943 podman[94251]: 2026-02-02 11:32:38.147660263 +0000 UTC m=+0.458395749 container died 9b23e415457bea9bc7f44c56abe24d7b059e943cfcdcaae776f4c2d90082d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:38 np0005604943 systemd[1]: var-lib-containers-storage-overlay-3933bf52899402030e49867fef7e504196644878bdf6b9a48c37c4371c27197c-merged.mount: Deactivated successfully.
Feb  2 06:32:38 np0005604943 podman[94251]: 2026-02-02 11:32:38.200299158 +0000 UTC m=+0.511034644 container remove 9b23e415457bea9bc7f44c56abe24d7b059e943cfcdcaae776f4c2d90082d3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:38 np0005604943 systemd[1]: libpod-conmon-9b23e415457bea9bc7f44c56abe24d7b059e943cfcdcaae776f4c2d90082d3a6.scope: Deactivated successfully.
Feb  2 06:32:38 np0005604943 python3[94327]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:38 np0005604943 podman[94373]: 2026-02-02 11:32:38.38476505 +0000 UTC m=+0.052297975 container create a05326b704c1d189bf8a07209b2de1bf00c4bac69229a9f3e1a071c71eb5dba5 (image=quay.io/ceph/ceph:v20, name=strange_easley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 06:32:38 np0005604943 systemd[1]: Started libpod-conmon-a05326b704c1d189bf8a07209b2de1bf00c4bac69229a9f3e1a071c71eb5dba5.scope.
Feb  2 06:32:38 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:38 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d41edc54b842fb8706b408b473d2d4248e9e9b477bc35d0b005720d502b64e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:38 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d41edc54b842fb8706b408b473d2d4248e9e9b477bc35d0b005720d502b64e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:38 np0005604943 podman[94373]: 2026-02-02 11:32:38.353848575 +0000 UTC m=+0.021381560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:38 np0005604943 podman[94373]: 2026-02-02 11:32:38.456704265 +0000 UTC m=+0.124237160 container init a05326b704c1d189bf8a07209b2de1bf00c4bac69229a9f3e1a071c71eb5dba5 (image=quay.io/ceph/ceph:v20, name=strange_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 06:32:38 np0005604943 podman[94373]: 2026-02-02 11:32:38.463979726 +0000 UTC m=+0.131512621 container start a05326b704c1d189bf8a07209b2de1bf00c4bac69229a9f3e1a071c71eb5dba5 (image=quay.io/ceph/ceph:v20, name=strange_easley, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 06:32:38 np0005604943 podman[94373]: 2026-02-02 11:32:38.469826925 +0000 UTC m=+0.137359820 container attach a05326b704c1d189bf8a07209b2de1bf00c4bac69229a9f3e1a071c71eb5dba5 (image=quay.io/ceph/ceph:v20, name=strange_easley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 06:32:38 np0005604943 podman[94426]: 2026-02-02 11:32:38.658709766 +0000 UTC m=+0.059077232 container create 0f1f41be1c2a0f026ae07254e97f8a66bc3e3b6aa9fd2a3c809d80cf3c67e0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Feb  2 06:32:38 np0005604943 systemd[1]: Started libpod-conmon-0f1f41be1c2a0f026ae07254e97f8a66bc3e3b6aa9fd2a3c809d80cf3c67e0bc.scope.
Feb  2 06:32:38 np0005604943 podman[94426]: 2026-02-02 11:32:38.618630015 +0000 UTC m=+0.018997511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:38 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:38 np0005604943 podman[94426]: 2026-02-02 11:32:38.73097069 +0000 UTC m=+0.131338176 container init 0f1f41be1c2a0f026ae07254e97f8a66bc3e3b6aa9fd2a3c809d80cf3c67e0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:38 np0005604943 podman[94426]: 2026-02-02 11:32:38.736917662 +0000 UTC m=+0.137285128 container start 0f1f41be1c2a0f026ae07254e97f8a66bc3e3b6aa9fd2a3c809d80cf3c67e0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_cori, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 06:32:38 np0005604943 modest_cori[94442]: 167 167
Feb  2 06:32:38 np0005604943 systemd[1]: libpod-0f1f41be1c2a0f026ae07254e97f8a66bc3e3b6aa9fd2a3c809d80cf3c67e0bc.scope: Deactivated successfully.
Feb  2 06:32:38 np0005604943 podman[94426]: 2026-02-02 11:32:38.740287439 +0000 UTC m=+0.140654935 container attach 0f1f41be1c2a0f026ae07254e97f8a66bc3e3b6aa9fd2a3c809d80cf3c67e0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_cori, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:38 np0005604943 conmon[94442]: conmon 0f1f41be1c2a0f026ae0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f1f41be1c2a0f026ae07254e97f8a66bc3e3b6aa9fd2a3c809d80cf3c67e0bc.scope/container/memory.events
Feb  2 06:32:38 np0005604943 podman[94426]: 2026-02-02 11:32:38.740748953 +0000 UTC m=+0.141116419 container died 0f1f41be1c2a0f026ae07254e97f8a66bc3e3b6aa9fd2a3c809d80cf3c67e0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:38 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ca5023a6cfc857b0da0138a635f3bef554891a4baf6447783c4ebf73e8bceb1a-merged.mount: Deactivated successfully.
Feb  2 06:32:38 np0005604943 podman[94426]: 2026-02-02 11:32:38.784083568 +0000 UTC m=+0.184451034 container remove 0f1f41be1c2a0f026ae07254e97f8a66bc3e3b6aa9fd2a3c809d80cf3c67e0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_cori, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:38 np0005604943 systemd[1]: libpod-conmon-0f1f41be1c2a0f026ae07254e97f8a66bc3e3b6aa9fd2a3c809d80cf3c67e0bc.scope: Deactivated successfully.
Feb  2 06:32:38 np0005604943 podman[94467]: 2026-02-02 11:32:38.933404303 +0000 UTC m=+0.056591470 container create 3f0bd373fb6002f237cd37d47e4e5cd9e9d6daad7962cdccf3a24d11eb995d73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Feb  2 06:32:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:32:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/229969178' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:32:39 np0005604943 podman[94467]: 2026-02-02 11:32:38.905647289 +0000 UTC m=+0.028834446 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:39 np0005604943 strange_easley[94391]: 
Feb  2 06:32:39 np0005604943 strange_easley[94391]: {"epoch":1,"fsid":"4548a36b-7cdc-5e3e-a814-4e1571be1fae","modified":"2026-02-02T11:30:47.628454Z","created":"2026-02-02T11:30:47.628454Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Feb  2 06:32:39 np0005604943 strange_easley[94391]: dumped monmap epoch 1
Feb  2 06:32:39 np0005604943 systemd[1]: libpod-a05326b704c1d189bf8a07209b2de1bf00c4bac69229a9f3e1a071c71eb5dba5.scope: Deactivated successfully.
Feb  2 06:32:39 np0005604943 systemd[1]: Started libpod-conmon-3f0bd373fb6002f237cd37d47e4e5cd9e9d6daad7962cdccf3a24d11eb995d73.scope.
Feb  2 06:32:39 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:39 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c155cfe0062c4aee53abd6fafd79c75b5a7963d5f77577a815baaed9fa472f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:39 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c155cfe0062c4aee53abd6fafd79c75b5a7963d5f77577a815baaed9fa472f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:39 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c155cfe0062c4aee53abd6fafd79c75b5a7963d5f77577a815baaed9fa472f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:39 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c155cfe0062c4aee53abd6fafd79c75b5a7963d5f77577a815baaed9fa472f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:39 np0005604943 podman[94467]: 2026-02-02 11:32:39.342884655 +0000 UTC m=+0.466071822 container init 3f0bd373fb6002f237cd37d47e4e5cd9e9d6daad7962cdccf3a24d11eb995d73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hoover, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 06:32:39 np0005604943 podman[94373]: 2026-02-02 11:32:39.345316595 +0000 UTC m=+1.012849500 container died a05326b704c1d189bf8a07209b2de1bf00c4bac69229a9f3e1a071c71eb5dba5 (image=quay.io/ceph/ceph:v20, name=strange_easley, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 06:32:39 np0005604943 podman[94467]: 2026-02-02 11:32:39.351262898 +0000 UTC m=+0.474450065 container start 3f0bd373fb6002f237cd37d47e4e5cd9e9d6daad7962cdccf3a24d11eb995d73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hoover, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:39 np0005604943 podman[94467]: 2026-02-02 11:32:39.377986452 +0000 UTC m=+0.501173619 container attach 3f0bd373fb6002f237cd37d47e4e5cd9e9d6daad7962cdccf3a24d11eb995d73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 06:32:39 np0005604943 systemd[1]: var-lib-containers-storage-overlay-69d41edc54b842fb8706b408b473d2d4248e9e9b477bc35d0b005720d502b64e-merged.mount: Deactivated successfully.
Feb  2 06:32:39 np0005604943 podman[94484]: 2026-02-02 11:32:39.412265825 +0000 UTC m=+0.389393151 container remove a05326b704c1d189bf8a07209b2de1bf00c4bac69229a9f3e1a071c71eb5dba5 (image=quay.io/ceph/ceph:v20, name=strange_easley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:39 np0005604943 systemd[1]: libpod-conmon-a05326b704c1d189bf8a07209b2de1bf00c4bac69229a9f3e1a071c71eb5dba5.scope: Deactivated successfully.
Feb  2 06:32:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:39 np0005604943 python3[94587]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:39 np0005604943 lvm[94605]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:32:39 np0005604943 lvm[94605]: VG ceph_vg0 finished
Feb  2 06:32:39 np0005604943 lvm[94606]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:32:39 np0005604943 lvm[94606]: VG ceph_vg1 finished
Feb  2 06:32:39 np0005604943 lvm[94613]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:32:39 np0005604943 lvm[94613]: VG ceph_vg2 finished
Feb  2 06:32:40 np0005604943 podman[94603]: 2026-02-02 11:32:40.001114622 +0000 UTC m=+0.046575930 container create d19bf7f3b37de89081a00a8a28e15e12e2abfb5c9d43ceb48b9a7a8cf7df88d3 (image=quay.io/ceph/ceph:v20, name=kind_nightingale, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 06:32:40 np0005604943 practical_hoover[94498]: {}
Feb  2 06:32:40 np0005604943 systemd[1]: Started libpod-conmon-d19bf7f3b37de89081a00a8a28e15e12e2abfb5c9d43ceb48b9a7a8cf7df88d3.scope.
Feb  2 06:32:40 np0005604943 podman[94603]: 2026-02-02 11:32:39.975282843 +0000 UTC m=+0.020744221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:40 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd00121259c406731a01ce3180aeda54d3b951ebfb02d759c3a2552652401018/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd00121259c406731a01ce3180aeda54d3b951ebfb02d759c3a2552652401018/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:40 np0005604943 systemd[1]: libpod-3f0bd373fb6002f237cd37d47e4e5cd9e9d6daad7962cdccf3a24d11eb995d73.scope: Deactivated successfully.
Feb  2 06:32:40 np0005604943 systemd[1]: libpod-3f0bd373fb6002f237cd37d47e4e5cd9e9d6daad7962cdccf3a24d11eb995d73.scope: Consumed 1.059s CPU time.
Feb  2 06:32:40 np0005604943 conmon[94498]: conmon 3f0bd373fb6002f237cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f0bd373fb6002f237cd37d47e4e5cd9e9d6daad7962cdccf3a24d11eb995d73.scope/container/memory.events
Feb  2 06:32:40 np0005604943 podman[94467]: 2026-02-02 11:32:40.105766864 +0000 UTC m=+1.228954031 container died 3f0bd373fb6002f237cd37d47e4e5cd9e9d6daad7962cdccf3a24d11eb995d73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hoover, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:32:40 np0005604943 podman[94603]: 2026-02-02 11:32:40.124242939 +0000 UTC m=+0.169704347 container init d19bf7f3b37de89081a00a8a28e15e12e2abfb5c9d43ceb48b9a7a8cf7df88d3 (image=quay.io/ceph/ceph:v20, name=kind_nightingale, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:40 np0005604943 podman[94603]: 2026-02-02 11:32:40.128680817 +0000 UTC m=+0.174142155 container start d19bf7f3b37de89081a00a8a28e15e12e2abfb5c9d43ceb48b9a7a8cf7df88d3 (image=quay.io/ceph/ceph:v20, name=kind_nightingale, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:40 np0005604943 systemd[1]: var-lib-containers-storage-overlay-7c155cfe0062c4aee53abd6fafd79c75b5a7963d5f77577a815baaed9fa472f8-merged.mount: Deactivated successfully.
Feb  2 06:32:40 np0005604943 podman[94603]: 2026-02-02 11:32:40.137717818 +0000 UTC m=+0.183179206 container attach d19bf7f3b37de89081a00a8a28e15e12e2abfb5c9d43ceb48b9a7a8cf7df88d3 (image=quay.io/ceph/ceph:v20, name=kind_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:40 np0005604943 podman[94467]: 2026-02-02 11:32:40.153787104 +0000 UTC m=+1.276974291 container remove 3f0bd373fb6002f237cd37d47e4e5cd9e9d6daad7962cdccf3a24d11eb995d73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hoover, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:40 np0005604943 systemd[1]: libpod-conmon-3f0bd373fb6002f237cd37d47e4e5cd9e9d6daad7962cdccf3a24d11eb995d73.scope: Deactivated successfully.
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:40 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev e5c0fc94-c457-4465-92fe-38173ab19a51 (Updating rgw.rgw deployment (+1 -> 1))
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ctqttb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ctqttb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ctqttb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:32:40 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.ctqttb on compute-0
Feb  2 06:32:40 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.ctqttb on compute-0
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2278812684' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb  2 06:32:40 np0005604943 kind_nightingale[94625]: [client.openstack]
Feb  2 06:32:40 np0005604943 kind_nightingale[94625]: #011key = AQDHioBpAAAAABAA51w/VIvmoyuNXxgfaqLFUQ==
Feb  2 06:32:40 np0005604943 kind_nightingale[94625]: #011caps mgr = "allow *"
Feb  2 06:32:40 np0005604943 kind_nightingale[94625]: #011caps mon = "profile rbd"
Feb  2 06:32:40 np0005604943 kind_nightingale[94625]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Feb  2 06:32:40 np0005604943 podman[94750]: 2026-02-02 11:32:40.703487997 +0000 UTC m=+0.043132280 container create 6c2d66dd6db818a499b3639a000ea8fa207af3c2f4ee396a3ff5b9fb3bb02ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swanson, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:40 np0005604943 systemd[1]: libpod-d19bf7f3b37de89081a00a8a28e15e12e2abfb5c9d43ceb48b9a7a8cf7df88d3.scope: Deactivated successfully.
Feb  2 06:32:40 np0005604943 podman[94603]: 2026-02-02 11:32:40.707178684 +0000 UTC m=+0.752639972 container died d19bf7f3b37de89081a00a8a28e15e12e2abfb5c9d43ceb48b9a7a8cf7df88d3 (image=quay.io/ceph/ceph:v20, name=kind_nightingale, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 06:32:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:32:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:32:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:32:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:32:40 np0005604943 systemd[1]: Started libpod-conmon-6c2d66dd6db818a499b3639a000ea8fa207af3c2f4ee396a3ff5b9fb3bb02ff5.scope.
Feb  2 06:32:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:32:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ctqttb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ctqttb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: Deploying daemon rgw.rgw.compute-0.ctqttb on compute-0
Feb  2 06:32:40 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/2278812684' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb  2 06:32:40 np0005604943 systemd[1]: var-lib-containers-storage-overlay-bd00121259c406731a01ce3180aeda54d3b951ebfb02d759c3a2552652401018-merged.mount: Deactivated successfully.
Feb  2 06:32:40 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:40 np0005604943 podman[94603]: 2026-02-02 11:32:40.766368768 +0000 UTC m=+0.811830066 container remove d19bf7f3b37de89081a00a8a28e15e12e2abfb5c9d43ceb48b9a7a8cf7df88d3 (image=quay.io/ceph/ceph:v20, name=kind_nightingale, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:40 np0005604943 systemd[1]: libpod-conmon-d19bf7f3b37de89081a00a8a28e15e12e2abfb5c9d43ceb48b9a7a8cf7df88d3.scope: Deactivated successfully.
Feb  2 06:32:40 np0005604943 podman[94750]: 2026-02-02 11:32:40.773407802 +0000 UTC m=+0.113052105 container init 6c2d66dd6db818a499b3639a000ea8fa207af3c2f4ee396a3ff5b9fb3bb02ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swanson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:40 np0005604943 podman[94750]: 2026-02-02 11:32:40.777584823 +0000 UTC m=+0.117229126 container start 6c2d66dd6db818a499b3639a000ea8fa207af3c2f4ee396a3ff5b9fb3bb02ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swanson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 06:32:40 np0005604943 podman[94750]: 2026-02-02 11:32:40.683150318 +0000 UTC m=+0.022794651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:40 np0005604943 suspicious_swanson[94779]: 167 167
Feb  2 06:32:40 np0005604943 systemd[1]: libpod-6c2d66dd6db818a499b3639a000ea8fa207af3c2f4ee396a3ff5b9fb3bb02ff5.scope: Deactivated successfully.
Feb  2 06:32:40 np0005604943 podman[94750]: 2026-02-02 11:32:40.783339361 +0000 UTC m=+0.122983664 container attach 6c2d66dd6db818a499b3639a000ea8fa207af3c2f4ee396a3ff5b9fb3bb02ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 06:32:40 np0005604943 podman[94750]: 2026-02-02 11:32:40.784301348 +0000 UTC m=+0.123945641 container died 6c2d66dd6db818a499b3639a000ea8fa207af3c2f4ee396a3ff5b9fb3bb02ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:40 np0005604943 systemd[1]: var-lib-containers-storage-overlay-90a315eb42508cb1a79e7442e9f1d89d33f76808c3f639c4750378e484b875c7-merged.mount: Deactivated successfully.
Feb  2 06:32:40 np0005604943 podman[94750]: 2026-02-02 11:32:40.824893054 +0000 UTC m=+0.164537347 container remove 6c2d66dd6db818a499b3639a000ea8fa207af3c2f4ee396a3ff5b9fb3bb02ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 06:32:40 np0005604943 systemd[1]: libpod-conmon-6c2d66dd6db818a499b3639a000ea8fa207af3c2f4ee396a3ff5b9fb3bb02ff5.scope: Deactivated successfully.
Feb  2 06:32:40 np0005604943 systemd[1]: Reloading.
Feb  2 06:32:40 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:32:40 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:32:41 np0005604943 systemd[1]: Reloading.
Feb  2 06:32:41 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:32:41 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:32:41 np0005604943 systemd[1]: Starting Ceph rgw.rgw.compute-0.ctqttb for 4548a36b-7cdc-5e3e-a814-4e1571be1fae...
Feb  2 06:32:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:41 np0005604943 podman[94923]: 2026-02-02 11:32:41.646076592 +0000 UTC m=+0.062912704 container create e99f14eeb73d2875924ac314fa66315a78e51bc47ad69dfc1b512fee8f93bf90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-rgw-rgw-compute-0-ctqttb, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 06:32:41 np0005604943 podman[94923]: 2026-02-02 11:32:41.60288809 +0000 UTC m=+0.019724212 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:41 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8422e6552c0ced62d9720d717acc360d2492d2fbf1e81dcfad6c88bbcd8d449c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:41 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8422e6552c0ced62d9720d717acc360d2492d2fbf1e81dcfad6c88bbcd8d449c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:41 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8422e6552c0ced62d9720d717acc360d2492d2fbf1e81dcfad6c88bbcd8d449c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:41 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8422e6552c0ced62d9720d717acc360d2492d2fbf1e81dcfad6c88bbcd8d449c/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.ctqttb supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:32:41 np0005604943 podman[94923]: 2026-02-02 11:32:41.977638965 +0000 UTC m=+0.394475077 container init e99f14eeb73d2875924ac314fa66315a78e51bc47ad69dfc1b512fee8f93bf90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-rgw-rgw-compute-0-ctqttb, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:41 np0005604943 podman[94923]: 2026-02-02 11:32:41.982181988 +0000 UTC m=+0.399018090 container start e99f14eeb73d2875924ac314fa66315a78e51bc47ad69dfc1b512fee8f93bf90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-rgw-rgw-compute-0-ctqttb, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:42 np0005604943 radosgw[94979]: deferred set uid:gid to 167:167 (ceph:ceph)
Feb  2 06:32:42 np0005604943 radosgw[94979]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Feb  2 06:32:42 np0005604943 radosgw[94979]: framework: beast
Feb  2 06:32:42 np0005604943 radosgw[94979]: framework conf key: endpoint, val: 192.168.122.100:8082
Feb  2 06:32:42 np0005604943 radosgw[94979]: init_numa not setting numa affinity
Feb  2 06:32:42 np0005604943 bash[94923]: e99f14eeb73d2875924ac314fa66315a78e51bc47ad69dfc1b512fee8f93bf90
Feb  2 06:32:42 np0005604943 systemd[1]: Started Ceph rgw.rgw.compute-0.ctqttb for 4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:42 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev e5c0fc94-c457-4465-92fe-38173ab19a51 (Updating rgw.rgw deployment (+1 -> 1))
Feb  2 06:32:42 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event e5c0fc94-c457-4465-92fe-38173ab19a51 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Feb  2 06:32:42 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Feb  2 06:32:42 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:42 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev b89bec64-c1fc-4cf6-96d5-78e292f371fa (Updating mds.cephfs deployment (+1 -> 1))
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mldrue", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mldrue", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mldrue", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:32:42 np0005604943 ceph-mgr[75558]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.mldrue on compute-0
Feb  2 06:32:42 np0005604943 ceph-mgr[75558]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.mldrue on compute-0
Feb  2 06:32:42 np0005604943 ansible-async_wrapper.py[95121]: Invoked with j576980239508 30 /home/zuul/.ansible/tmp/ansible-tmp-1770031961.9509847-36848-119436207624034/AnsiballZ_command.py _
Feb  2 06:32:42 np0005604943 ansible-async_wrapper.py[95174]: Starting module and watcher
Feb  2 06:32:42 np0005604943 ansible-async_wrapper.py[95174]: Start watching 95175 (30)
Feb  2 06:32:42 np0005604943 ansible-async_wrapper.py[95175]: Start module (95175)
Feb  2 06:32:42 np0005604943 ansible-async_wrapper.py[95121]: Return async_wrapper task started.
Feb  2 06:32:42 np0005604943 python3[95176]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:42 np0005604943 podman[95179]: 2026-02-02 11:32:42.60334419 +0000 UTC m=+0.045156969 container create 5e2e1db9ed3532d422bd2d87c08895e83ceb758f4df5f2f14fa3816ff6a3f7e6 (image=quay.io/ceph/ceph:v20, name=boring_carver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 06:32:42 np0005604943 systemd[1]: Started libpod-conmon-5e2e1db9ed3532d422bd2d87c08895e83ceb758f4df5f2f14fa3816ff6a3f7e6.scope.
Feb  2 06:32:42 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:42 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470b6c23745315acd1f9c0fdf1bd1f496fd78f8d905f0693b8b1246c29400d7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:42 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470b6c23745315acd1f9c0fdf1bd1f496fd78f8d905f0693b8b1246c29400d7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:42 np0005604943 podman[95179]: 2026-02-02 11:32:42.580715665 +0000 UTC m=+0.022528474 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:42 np0005604943 podman[95179]: 2026-02-02 11:32:42.687003204 +0000 UTC m=+0.128816013 container init 5e2e1db9ed3532d422bd2d87c08895e83ceb758f4df5f2f14fa3816ff6a3f7e6 (image=quay.io/ceph/ceph:v20, name=boring_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:42 np0005604943 podman[95179]: 2026-02-02 11:32:42.691821094 +0000 UTC m=+0.133633883 container start 5e2e1db9ed3532d422bd2d87c08895e83ceb758f4df5f2f14fa3816ff6a3f7e6 (image=quay.io/ceph/ceph:v20, name=boring_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 06:32:42 np0005604943 podman[95179]: 2026-02-02 11:32:42.699461605 +0000 UTC m=+0.141274434 container attach 5e2e1db9ed3532d422bd2d87c08895e83ceb758f4df5f2f14fa3816ff6a3f7e6 (image=quay.io/ceph/ceph:v20, name=boring_carver, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:42 np0005604943 podman[95237]: 2026-02-02 11:32:42.728378503 +0000 UTC m=+0.038695712 container create 1968fa8aa1798b9c73172a41a356a0323160b5a41d5cc83c4fb93d7f81131459 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 06:32:42 np0005604943 systemd[1]: Started libpod-conmon-1968fa8aa1798b9c73172a41a356a0323160b5a41d5cc83c4fb93d7f81131459.scope.
Feb  2 06:32:42 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:42 np0005604943 podman[95237]: 2026-02-02 11:32:42.803345174 +0000 UTC m=+0.113662493 container init 1968fa8aa1798b9c73172a41a356a0323160b5a41d5cc83c4fb93d7f81131459 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 06:32:42 np0005604943 podman[95237]: 2026-02-02 11:32:42.70930332 +0000 UTC m=+0.019620559 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:42 np0005604943 podman[95237]: 2026-02-02 11:32:42.808751601 +0000 UTC m=+0.119068810 container start 1968fa8aa1798b9c73172a41a356a0323160b5a41d5cc83c4fb93d7f81131459 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:42 np0005604943 nifty_keller[95255]: 167 167
Feb  2 06:32:42 np0005604943 systemd[1]: libpod-1968fa8aa1798b9c73172a41a356a0323160b5a41d5cc83c4fb93d7f81131459.scope: Deactivated successfully.
Feb  2 06:32:42 np0005604943 conmon[95255]: conmon 1968fa8aa1798b9c7317 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1968fa8aa1798b9c73172a41a356a0323160b5a41d5cc83c4fb93d7f81131459.scope/container/memory.events
Feb  2 06:32:42 np0005604943 podman[95237]: 2026-02-02 11:32:42.814576719 +0000 UTC m=+0.124893928 container attach 1968fa8aa1798b9c73172a41a356a0323160b5a41d5cc83c4fb93d7f81131459 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 06:32:42 np0005604943 podman[95237]: 2026-02-02 11:32:42.815007072 +0000 UTC m=+0.125324271 container died 1968fa8aa1798b9c73172a41a356a0323160b5a41d5cc83c4fb93d7f81131459 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:32:42 np0005604943 systemd[1]: var-lib-containers-storage-overlay-30b44e84b858691e79ce08a1007204e3045c6b513855fe06289dda37faff0f37-merged.mount: Deactivated successfully.
Feb  2 06:32:42 np0005604943 podman[95237]: 2026-02-02 11:32:42.856942847 +0000 UTC m=+0.167260096 container remove 1968fa8aa1798b9c73172a41a356a0323160b5a41d5cc83c4fb93d7f81131459 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:42 np0005604943 systemd[1]: libpod-conmon-1968fa8aa1798b9c73172a41a356a0323160b5a41d5cc83c4fb93d7f81131459.scope: Deactivated successfully.
Feb  2 06:32:42 np0005604943 systemd[1]: Reloading.
Feb  2 06:32:42 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:32:42 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Feb  2 06:32:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3979104623' entity='client.rgw.rgw.compute-0.ctqttb' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb  2 06:32:43 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 06:32:43 np0005604943 boring_carver[95223]: 
Feb  2 06:32:43 np0005604943 boring_carver[95223]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  2 06:32:43 np0005604943 systemd[1]: Reloading.
Feb  2 06:32:43 np0005604943 podman[95179]: 2026-02-02 11:32:43.1950465 +0000 UTC m=+0.636859289 container died 5e2e1db9ed3532d422bd2d87c08895e83ceb758f4df5f2f14fa3816ff6a3f7e6 (image=quay.io/ceph/ceph:v20, name=boring_carver, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: Saving service rgw.rgw spec with placement compute-0
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mldrue", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mldrue", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: Deploying daemon mds.cephfs.compute-0.mldrue on compute-0
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3979104623' entity='client.rgw.rgw.compute-0.ctqttb' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb  2 06:32:43 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:32:43 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:32:43 np0005604943 systemd[1]: libpod-5e2e1db9ed3532d422bd2d87c08895e83ceb758f4df5f2f14fa3816ff6a3f7e6.scope: Deactivated successfully.
Feb  2 06:32:43 np0005604943 systemd[1]: var-lib-containers-storage-overlay-470b6c23745315acd1f9c0fdf1bd1f496fd78f8d905f0693b8b1246c29400d7b-merged.mount: Deactivated successfully.
Feb  2 06:32:43 np0005604943 podman[95179]: 2026-02-02 11:32:43.428589916 +0000 UTC m=+0.870402745 container remove 5e2e1db9ed3532d422bd2d87c08895e83ceb758f4df5f2f14fa3816ff6a3f7e6 (image=quay.io/ceph/ceph:v20, name=boring_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 06:32:43 np0005604943 systemd[1]: Starting Ceph mds.cephfs.compute-0.mldrue for 4548a36b-7cdc-5e3e-a814-4e1571be1fae...
Feb  2 06:32:43 np0005604943 systemd[1]: libpod-conmon-5e2e1db9ed3532d422bd2d87c08895e83ceb758f4df5f2f14fa3816ff6a3f7e6.scope: Deactivated successfully.
Feb  2 06:32:43 np0005604943 ansible-async_wrapper.py[95175]: Module complete (95175)
Feb  2 06:32:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v69: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:43 np0005604943 podman[95479]: 2026-02-02 11:32:43.662979295 +0000 UTC m=+0.054266263 container create 2fd942fa7e2a914f99986cfceb092b754d16f7807e8ec91f52763c7e36375409 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mds-cephfs-compute-0-mldrue, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:43 np0005604943 python3[95467]: ansible-ansible.legacy.async_status Invoked with jid=j576980239508.95121 mode=status _async_dir=/root/.ansible_async
Feb  2 06:32:43 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f03d7f06f2160d053bffff722c287214a7ee4cb2856ca91947d8156b2bd569d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:43 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f03d7f06f2160d053bffff722c287214a7ee4cb2856ca91947d8156b2bd569d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:43 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f03d7f06f2160d053bffff722c287214a7ee4cb2856ca91947d8156b2bd569d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:43 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f03d7f06f2160d053bffff722c287214a7ee4cb2856ca91947d8156b2bd569d/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.mldrue supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:43 np0005604943 podman[95479]: 2026-02-02 11:32:43.63827035 +0000 UTC m=+0.029557328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:43 np0005604943 podman[95479]: 2026-02-02 11:32:43.732694344 +0000 UTC m=+0.123981312 container init 2fd942fa7e2a914f99986cfceb092b754d16f7807e8ec91f52763c7e36375409 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mds-cephfs-compute-0-mldrue, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 06:32:43 np0005604943 podman[95479]: 2026-02-02 11:32:43.738704798 +0000 UTC m=+0.129991746 container start 2fd942fa7e2a914f99986cfceb092b754d16f7807e8ec91f52763c7e36375409 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mds-cephfs-compute-0-mldrue, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:32:43 np0005604943 bash[95479]: 2fd942fa7e2a914f99986cfceb092b754d16f7807e8ec91f52763c7e36375409
Feb  2 06:32:43 np0005604943 systemd[1]: Started Ceph mds.cephfs.compute-0.mldrue for 4548a36b-7cdc-5e3e-a814-4e1571be1fae.
Feb  2 06:32:43 np0005604943 ceph-mds[95505]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 06:32:43 np0005604943 ceph-mds[95505]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Feb  2 06:32:43 np0005604943 ceph-mds[95505]: main not setting numa affinity
Feb  2 06:32:43 np0005604943 ceph-mds[95505]: pidfile_write: ignore empty --pid-file
Feb  2 06:32:43 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mds-cephfs-compute-0-mldrue[95494]: starting mds.cephfs.compute-0.mldrue at 
Feb  2 06:32:43 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue Updating MDS map to version 2 from mon.0
Feb  2 06:32:43 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 31 pg[8.0( empty local-lis/les=0/0 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [1] r=0 lpr=31 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:43 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev b89bec64-c1fc-4cf6-96d5-78e292f371fa (Updating mds.cephfs deployment (+1 -> 1))
Feb  2 06:32:43 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event b89bec64-c1fc-4cf6-96d5-78e292f371fa (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:43 np0005604943 python3[95565]: ansible-ansible.legacy.async_status Invoked with jid=j576980239508.95121 mode=cleanup _async_dir=/root/.ansible_async
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3979104623' entity='client.rgw.rgw.compute-0.ctqttb' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Feb  2 06:32:43 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Feb  2 06:32:44 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=31/32 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [1] r=0 lpr=31 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/3979104623' entity='client.rgw.rgw.compute-0.ctqttb' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb  2 06:32:44 np0005604943 podman[95684]: 2026-02-02 11:32:44.570839974 +0000 UTC m=+0.234196726 container exec fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:44 np0005604943 python3[95723]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).mds e3 new map
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2026-02-02T11:32:44:281809+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-02T11:32:32.482996+0000#012modified#0112026-02-02T11:32:32.482996+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.mldrue{-1:14253} state up:standby seq 1 addr [v2:192.168.122.100:6814/2041160053,v1:192.168.122.100:6815/2041160053] compat {c=[1],r=[1],i=[1fff]}]
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue Updating MDS map to version 3 from mon.0
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue Monitors have assigned me to become a standby
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2041160053,v1:192.168.122.100:6815/2041160053] up:boot
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2041160053,v1:192.168.122.100:6815/2041160053] as mds.0
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.mldrue assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.mldrue"} v 0)
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.mldrue"} : dispatch
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).mds e3 all = 0
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).mds e4 new map
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-02-02T11:32:44:723008+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-02T11:32:32.482996+0000#012modified#0112026-02-02T11:32:44.722996+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14253}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.mldrue{0:14253} state up:creating seq 1 addr [v2:192.168.122.100:6814/2041160053,v1:192.168.122.100:6815/2041160053] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue Updating MDS map to version 4 from mon.0
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.4 handle_mds_map I am now mds.0.4
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.cache creating system inode with ino:0x1
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.cache creating system inode with ino:0x100
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.cache creating system inode with ino:0x600
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.cache creating system inode with ino:0x601
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.cache creating system inode with ino:0x602
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.cache creating system inode with ino:0x603
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.cache creating system inode with ino:0x604
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.cache creating system inode with ino:0x605
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.cache creating system inode with ino:0x606
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.cache creating system inode with ino:0x607
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.cache creating system inode with ino:0x608
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.cache creating system inode with ino:0x609
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.mldrue=up:creating}
Feb  2 06:32:44 np0005604943 podman[95741]: 2026-02-02 11:32:44.800581408 +0000 UTC m=+0.114175158 container exec_died fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 06:32:44 np0005604943 podman[95730]: 2026-02-02 11:32:44.793991327 +0000 UTC m=+0.203151945 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:44 np0005604943 podman[95684]: 2026-02-02 11:32:44.971404847 +0000 UTC m=+0.634761549 container exec_died fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Feb  2 06:32:44 np0005604943 ceph-mds[95505]: mds.0.4 creating_done
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.mldrue is now active in filesystem cephfs as rank 0
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb  2 06:32:45 np0005604943 podman[95730]: 2026-02-02 11:32:45.237548056 +0000 UTC m=+0.646708694 container create d6bfd1dbae17e2af23ff0db8aa33a308ea0406787a52d140c2e387638c97be78 (image=quay.io/ceph/ceph:v20, name=objective_jemison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 06:32:45 np0005604943 systemd[1]: Started libpod-conmon-d6bfd1dbae17e2af23ff0db8aa33a308ea0406787a52d140c2e387638c97be78.scope.
Feb  2 06:32:45 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:45 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a903409ce3f16b4b794a6e94f5a4f9c4f395a694dedb42d27fa864b4719f010b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:45 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a903409ce3f16b4b794a6e94f5a4f9c4f395a694dedb42d27fa864b4719f010b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v72: 9 pgs: 2 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:32:45 np0005604943 podman[95730]: 2026-02-02 11:32:45.567332928 +0000 UTC m=+0.976493606 container init d6bfd1dbae17e2af23ff0db8aa33a308ea0406787a52d140c2e387638c97be78 (image=quay.io/ceph/ceph:v20, name=objective_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 06:32:45 np0005604943 podman[95730]: 2026-02-02 11:32:45.575562877 +0000 UTC m=+0.984723505 container start d6bfd1dbae17e2af23ff0db8aa33a308ea0406787a52d140c2e387638c97be78 (image=quay.io/ceph/ceph:v20, name=objective_jemison, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:45 np0005604943 podman[95730]: 2026-02-02 11:32:45.598007857 +0000 UTC m=+1.007168495 container attach d6bfd1dbae17e2af23ff0db8aa33a308ea0406787a52d140c2e387638c97be78 (image=quay.io/ceph/ceph:v20, name=objective_jemison, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:45 np0005604943 ceph-mgr[75558]: [progress INFO root] Writing back 5 completed events
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: daemon mds.cephfs.compute-0.mldrue assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: Cluster is now healthy
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: daemon mds.cephfs.compute-0.mldrue is now active in filesystem cephfs as rank 0
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:45 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 33 pg[9.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).mds e5 new map
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2026-02-02T11:32:45:823574+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-02T11:32:32.482996+0000#012modified#0112026-02-02T11:32:45.823572+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14253}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14253 members: 14253#012[mds.cephfs.compute-0.mldrue{0:14253} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2041160053,v1:192.168.122.100:6815/2041160053] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Feb  2 06:32:45 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue Updating MDS map to version 5 from mon.0
Feb  2 06:32:45 np0005604943 ceph-mds[95505]: mds.0.4 handle_mds_map I am now mds.0.4
Feb  2 06:32:45 np0005604943 ceph-mds[95505]: mds.0.4 handle_mds_map state change up:creating --> up:active
Feb  2 06:32:45 np0005604943 ceph-mds[95505]: mds.0.4 recovery_done -- successful recovery!
Feb  2 06:32:45 np0005604943 ceph-mds[95505]: mds.0.4 active_start
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2041160053,v1:192.168.122.100:6815/2041160053] up:active
Feb  2 06:32:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.mldrue=up:active}
Feb  2 06:32:46 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 06:32:46 np0005604943 objective_jemison[96344]: 
Feb  2 06:32:46 np0005604943 objective_jemison[96344]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  2 06:32:46 np0005604943 systemd[1]: libpod-d6bfd1dbae17e2af23ff0db8aa33a308ea0406787a52d140c2e387638c97be78.scope: Deactivated successfully.
Feb  2 06:32:46 np0005604943 podman[95730]: 2026-02-02 11:32:46.047418605 +0000 UTC m=+1.456579213 container died d6bfd1dbae17e2af23ff0db8aa33a308ea0406787a52d140c2e387638c97be78 (image=quay.io/ceph/ceph:v20, name=objective_jemison, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Feb  2 06:32:46 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a903409ce3f16b4b794a6e94f5a4f9c4f395a694dedb42d27fa864b4719f010b-merged.mount: Deactivated successfully.
Feb  2 06:32:46 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:32:46 np0005604943 podman[95730]: 2026-02-02 11:32:46.213526947 +0000 UTC m=+1.622687545 container remove d6bfd1dbae17e2af23ff0db8aa33a308ea0406787a52d140c2e387638c97be78 (image=quay.io/ceph/ceph:v20, name=objective_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:46 np0005604943 systemd[1]: libpod-conmon-d6bfd1dbae17e2af23ff0db8aa33a308ea0406787a52d140c2e387638c97be78.scope: Deactivated successfully.
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:32:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:32:47 np0005604943 podman[96607]: 2026-02-02 11:32:47.059241835 +0000 UTC m=+0.117536846 container create 25d3183f1d4e2ab59e7df5acac5bc88701bae278d4af21f66900fc49892e6387 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_gauss, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 06:32:47 np0005604943 podman[96607]: 2026-02-02 11:32:46.975298213 +0000 UTC m=+0.033593254 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:47 np0005604943 systemd[1]: Started libpod-conmon-25d3183f1d4e2ab59e7df5acac5bc88701bae278d4af21f66900fc49892e6387.scope.
Feb  2 06:32:47 np0005604943 python3[96618]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Feb  2 06:32:47 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Feb  2 06:32:47 np0005604943 podman[96607]: 2026-02-02 11:32:47.244759028 +0000 UTC m=+0.303054129 container init 25d3183f1d4e2ab59e7df5acac5bc88701bae278d4af21f66900fc49892e6387 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_gauss, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:47 np0005604943 podman[96607]: 2026-02-02 11:32:47.252285087 +0000 UTC m=+0.310580138 container start 25d3183f1d4e2ab59e7df5acac5bc88701bae278d4af21f66900fc49892e6387 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Feb  2 06:32:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Feb  2 06:32:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb  2 06:32:47 np0005604943 vigorous_gauss[96629]: 167 167
Feb  2 06:32:47 np0005604943 systemd[1]: libpod-25d3183f1d4e2ab59e7df5acac5bc88701bae278d4af21f66900fc49892e6387.scope: Deactivated successfully.
Feb  2 06:32:47 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 35 pg[10.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:32:47 np0005604943 podman[96607]: 2026-02-02 11:32:47.272485732 +0000 UTC m=+0.330780783 container attach 25d3183f1d4e2ab59e7df5acac5bc88701bae278d4af21f66900fc49892e6387 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:47 np0005604943 podman[96607]: 2026-02-02 11:32:47.272988726 +0000 UTC m=+0.331283757 container died 25d3183f1d4e2ab59e7df5acac5bc88701bae278d4af21f66900fc49892e6387 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_gauss, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 06:32:47 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a2bbded06c5df8e9f72caf3d7c5913271002cbcb1f0e033e6467d5262d8d93a9-merged.mount: Deactivated successfully.
Feb  2 06:32:47 np0005604943 ansible-async_wrapper.py[95174]: Done in kid B.
Feb  2 06:32:47 np0005604943 podman[96607]: 2026-02-02 11:32:47.406382891 +0000 UTC m=+0.464677902 container remove 25d3183f1d4e2ab59e7df5acac5bc88701bae278d4af21f66900fc49892e6387 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:47 np0005604943 systemd[1]: libpod-conmon-25d3183f1d4e2ab59e7df5acac5bc88701bae278d4af21f66900fc49892e6387.scope: Deactivated successfully.
Feb  2 06:32:47 np0005604943 podman[96632]: 2026-02-02 11:32:47.321311406 +0000 UTC m=+0.139053828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:47 np0005604943 podman[96632]: 2026-02-02 11:32:47.450596891 +0000 UTC m=+0.268339273 container create b2e159feea4ba5c1c7705d612496bcfc9e526ea6546f55b5b4fa4c282bfc8435 (image=quay.io/ceph/ceph:v20, name=intelligent_fermi, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:47 np0005604943 systemd[1]: Started libpod-conmon-b2e159feea4ba5c1c7705d612496bcfc9e526ea6546f55b5b4fa4c282bfc8435.scope.
Feb  2 06:32:47 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2b4cd3caa5abe44db9c7fe13d3801c644e7865663e9f4db48ca51ad4904d4b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2b4cd3caa5abe44db9c7fe13d3801c644e7865663e9f4db48ca51ad4904d4b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:47 np0005604943 podman[96632]: 2026-02-02 11:32:47.559856226 +0000 UTC m=+0.377598588 container init b2e159feea4ba5c1c7705d612496bcfc9e526ea6546f55b5b4fa4c282bfc8435 (image=quay.io/ceph/ceph:v20, name=intelligent_fermi, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v75: 10 pgs: 1 unknown, 9 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s
Feb  2 06:32:47 np0005604943 podman[96632]: 2026-02-02 11:32:47.568707663 +0000 UTC m=+0.386450015 container start b2e159feea4ba5c1c7705d612496bcfc9e526ea6546f55b5b4fa4c282bfc8435 (image=quay.io/ceph/ceph:v20, name=intelligent_fermi, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:47 np0005604943 podman[96632]: 2026-02-02 11:32:47.580536555 +0000 UTC m=+0.398278897 container attach b2e159feea4ba5c1c7705d612496bcfc9e526ea6546f55b5b4fa4c282bfc8435 (image=quay.io/ceph/ceph:v20, name=intelligent_fermi, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:47 np0005604943 podman[96670]: 2026-02-02 11:32:47.620882314 +0000 UTC m=+0.105358393 container create 688efb2eb8064a953e589587e7161e2cc56528f13fab9092f4b43b4fa08e56c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:47 np0005604943 podman[96670]: 2026-02-02 11:32:47.548878739 +0000 UTC m=+0.033354858 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:47 np0005604943 systemd[1]: Started libpod-conmon-688efb2eb8064a953e589587e7161e2cc56528f13fab9092f4b43b4fa08e56c2.scope.
Feb  2 06:32:47 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9de176193fef3cce5d87b10379dc3aabcde164b90576ae249fe2e9b9fee5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9de176193fef3cce5d87b10379dc3aabcde164b90576ae249fe2e9b9fee5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9de176193fef3cce5d87b10379dc3aabcde164b90576ae249fe2e9b9fee5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9de176193fef3cce5d87b10379dc3aabcde164b90576ae249fe2e9b9fee5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88d9de176193fef3cce5d87b10379dc3aabcde164b90576ae249fe2e9b9fee5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:47 np0005604943 podman[96670]: 2026-02-02 11:32:47.872425061 +0000 UTC m=+0.356901150 container init 688efb2eb8064a953e589587e7161e2cc56528f13fab9092f4b43b4fa08e56c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 06:32:47 np0005604943 podman[96670]: 2026-02-02 11:32:47.877856127 +0000 UTC m=+0.362332186 container start 688efb2eb8064a953e589587e7161e2cc56528f13fab9092f4b43b4fa08e56c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 06:32:47 np0005604943 podman[96670]: 2026-02-02 11:32:47.957536855 +0000 UTC m=+0.442012954 container attach 688efb2eb8064a953e589587e7161e2cc56528f13fab9092f4b43b4fa08e56c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:32:47 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 06:32:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.ctqttb", "name": "rgw_frontends"} v 0)
Feb  2 06:32:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.ctqttb", "name": "rgw_frontends"} : dispatch
Feb  2 06:32:47 np0005604943 intelligent_fermi[96672]: 
Feb  2 06:32:47 np0005604943 intelligent_fermi[96672]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Feb  2 06:32:48 np0005604943 systemd[1]: libpod-b2e159feea4ba5c1c7705d612496bcfc9e526ea6546f55b5b4fa4c282bfc8435.scope: Deactivated successfully.
Feb  2 06:32:48 np0005604943 podman[96632]: 2026-02-02 11:32:48.001965743 +0000 UTC m=+0.819708075 container died b2e159feea4ba5c1c7705d612496bcfc9e526ea6546f55b5b4fa4c282bfc8435 (image=quay.io/ceph/ceph:v20, name=intelligent_fermi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 06:32:48 np0005604943 systemd[1]: var-lib-containers-storage-overlay-e2b4cd3caa5abe44db9c7fe13d3801c644e7865663e9f4db48ca51ad4904d4b5-merged.mount: Deactivated successfully.
Feb  2 06:32:48 np0005604943 podman[96632]: 2026-02-02 11:32:48.226682952 +0000 UTC m=+1.044425284 container remove b2e159feea4ba5c1c7705d612496bcfc9e526ea6546f55b5b4fa4c282bfc8435 (image=quay.io/ceph/ceph:v20, name=intelligent_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 06:32:48 np0005604943 systemd[1]: libpod-conmon-b2e159feea4ba5c1c7705d612496bcfc9e526ea6546f55b5b4fa4c282bfc8435.scope: Deactivated successfully.
Feb  2 06:32:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Feb  2 06:32:48 np0005604943 frosty_turing[96709]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:32:48 np0005604943 frosty_turing[96709]: --> All data devices are unavailable
Feb  2 06:32:48 np0005604943 systemd[1]: libpod-688efb2eb8064a953e589587e7161e2cc56528f13fab9092f4b43b4fa08e56c2.scope: Deactivated successfully.
Feb  2 06:32:48 np0005604943 podman[96670]: 2026-02-02 11:32:48.31982347 +0000 UTC m=+0.804299579 container died 688efb2eb8064a953e589587e7161e2cc56528f13fab9092f4b43b4fa08e56c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_turing, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 06:32:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb  2 06:32:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Feb  2 06:32:48 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Feb  2 06:32:48 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb  2 06:32:48 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:32:48 np0005604943 systemd[1]: var-lib-containers-storage-overlay-e88d9de176193fef3cce5d87b10379dc3aabcde164b90576ae249fe2e9b9fee5-merged.mount: Deactivated successfully.
Feb  2 06:32:48 np0005604943 podman[96670]: 2026-02-02 11:32:48.597019739 +0000 UTC m=+1.081495838 container remove 688efb2eb8064a953e589587e7161e2cc56528f13fab9092f4b43b4fa08e56c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:48 np0005604943 systemd[1]: libpod-conmon-688efb2eb8064a953e589587e7161e2cc56528f13fab9092f4b43b4fa08e56c2.scope: Deactivated successfully.
Feb  2 06:32:49 np0005604943 podman[96817]: 2026-02-02 11:32:49.121707038 +0000 UTC m=+0.112227061 container create cdc720301fa29536ab19bef6ad85b447d22dc9d51a7a0ce542c57f8d554916c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 06:32:49 np0005604943 podman[96817]: 2026-02-02 11:32:49.036089048 +0000 UTC m=+0.026609091 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:49 np0005604943 systemd[1]: Started libpod-conmon-cdc720301fa29536ab19bef6ad85b447d22dc9d51a7a0ce542c57f8d554916c2.scope.
Feb  2 06:32:49 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:49 np0005604943 python3[96856]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:49 np0005604943 podman[96817]: 2026-02-02 11:32:49.243480716 +0000 UTC m=+0.234000779 container init cdc720301fa29536ab19bef6ad85b447d22dc9d51a7a0ce542c57f8d554916c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamport, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:49 np0005604943 podman[96817]: 2026-02-02 11:32:49.24811467 +0000 UTC m=+0.238634713 container start cdc720301fa29536ab19bef6ad85b447d22dc9d51a7a0ce542c57f8d554916c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 06:32:49 np0005604943 dazzling_lamport[96859]: 167 167
Feb  2 06:32:49 np0005604943 systemd[1]: libpod-cdc720301fa29536ab19bef6ad85b447d22dc9d51a7a0ce542c57f8d554916c2.scope: Deactivated successfully.
Feb  2 06:32:49 np0005604943 podman[96817]: 2026-02-02 11:32:49.304278187 +0000 UTC m=+0.294798220 container attach cdc720301fa29536ab19bef6ad85b447d22dc9d51a7a0ce542c57f8d554916c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:32:49 np0005604943 podman[96817]: 2026-02-02 11:32:49.305282556 +0000 UTC m=+0.295802599 container died cdc720301fa29536ab19bef6ad85b447d22dc9d51a7a0ce542c57f8d554916c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 06:32:49 np0005604943 podman[96862]: 2026-02-02 11:32:49.250030576 +0000 UTC m=+0.024125361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:49 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1fe8f8a874b9414950cc0e996ca1798aa95da0cfc7140b580fcbb5cdbb02c16d-merged.mount: Deactivated successfully.
Feb  2 06:32:49 np0005604943 podman[96817]: 2026-02-02 11:32:49.470303366 +0000 UTC m=+0.460823419 container remove cdc720301fa29536ab19bef6ad85b447d22dc9d51a7a0ce542c57f8d554916c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Feb  2 06:32:49 np0005604943 podman[96862]: 2026-02-02 11:32:49.530229422 +0000 UTC m=+0.304324177 container create f396cda4a5f30eb1a6b7e48e8031fd0dc30692615029bfb66d9f22bec6470775 (image=quay.io/ceph/ceph:v20, name=sharp_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 06:32:49 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb  2 06:32:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Feb  2 06:32:49 np0005604943 systemd[1]: Started libpod-conmon-f396cda4a5f30eb1a6b7e48e8031fd0dc30692615029bfb66d9f22bec6470775.scope.
Feb  2 06:32:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v77: 10 pgs: 1 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 4.7 KiB/s wr, 13 op/s
Feb  2 06:32:49 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Feb  2 06:32:49 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 37 pg[11.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:32:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Feb  2 06:32:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb  2 06:32:49 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a270fcaee190056df222a638b67bc75440a9730caffcd2967eac61e79c07dbd6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a270fcaee190056df222a638b67bc75440a9730caffcd2967eac61e79c07dbd6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:49 np0005604943 systemd[1]: libpod-conmon-cdc720301fa29536ab19bef6ad85b447d22dc9d51a7a0ce542c57f8d554916c2.scope: Deactivated successfully.
Feb  2 06:32:49 np0005604943 podman[96862]: 2026-02-02 11:32:49.608746177 +0000 UTC m=+0.382840952 container init f396cda4a5f30eb1a6b7e48e8031fd0dc30692615029bfb66d9f22bec6470775 (image=quay.io/ceph/ceph:v20, name=sharp_curie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:49 np0005604943 podman[96862]: 2026-02-02 11:32:49.61438117 +0000 UTC m=+0.388475935 container start f396cda4a5f30eb1a6b7e48e8031fd0dc30692615029bfb66d9f22bec6470775 (image=quay.io/ceph/ceph:v20, name=sharp_curie, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:49 np0005604943 podman[96862]: 2026-02-02 11:32:49.631904987 +0000 UTC m=+0.405999742 container attach f396cda4a5f30eb1a6b7e48e8031fd0dc30692615029bfb66d9f22bec6470775 (image=quay.io/ceph/ceph:v20, name=sharp_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 06:32:49 np0005604943 podman[96899]: 2026-02-02 11:32:49.646115918 +0000 UTC m=+0.073280013 container create b994b8d13a9802c4f43bf846f1b6a3dd2dbeebecf7763851f278426a01cb0dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 06:32:49 np0005604943 systemd[1]: Started libpod-conmon-b994b8d13a9802c4f43bf846f1b6a3dd2dbeebecf7763851f278426a01cb0dc5.scope.
Feb  2 06:32:49 np0005604943 podman[96899]: 2026-02-02 11:32:49.605568764 +0000 UTC m=+0.032732909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:49 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d38b546e74c620ae15c3c967edb9bc36228c604d5ad668d430d904322793f34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d38b546e74c620ae15c3c967edb9bc36228c604d5ad668d430d904322793f34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d38b546e74c620ae15c3c967edb9bc36228c604d5ad668d430d904322793f34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:49 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d38b546e74c620ae15c3c967edb9bc36228c604d5ad668d430d904322793f34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:49 np0005604943 podman[96899]: 2026-02-02 11:32:49.748762962 +0000 UTC m=+0.175927037 container init b994b8d13a9802c4f43bf846f1b6a3dd2dbeebecf7763851f278426a01cb0dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mayer, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 06:32:49 np0005604943 ceph-mds[95505]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb  2 06:32:49 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mds-cephfs-compute-0-mldrue[95494]: 2026-02-02T11:32:49.751+0000 7f3622c61640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb  2 06:32:49 np0005604943 podman[96899]: 2026-02-02 11:32:49.755557739 +0000 UTC m=+0.182721794 container start b994b8d13a9802c4f43bf846f1b6a3dd2dbeebecf7763851f278426a01cb0dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 06:32:49 np0005604943 podman[96899]: 2026-02-02 11:32:49.797072131 +0000 UTC m=+0.224236186 container attach b994b8d13a9802c4f43bf846f1b6a3dd2dbeebecf7763851f278426a01cb0dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mayer, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:49 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 06:32:50 np0005604943 sharp_curie[96901]: 
Feb  2 06:32:50 np0005604943 sharp_curie[96901]: [{"container_id": "61b0483497dc", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.22%", "created": "2026-02-02T11:31:31.270406Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-02-02T11:31:31.347735Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T11:32:46.349870Z", "memory_usage": 7790919, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-02-02T11:31:31.144651Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae@crash.compute-0", "version": "20.2.0"}, {"container_id": "2fd942fa7e2a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "3.55%", "created": "2026-02-02T11:32:43.753454Z", "daemon_id": "cephfs.compute-0.mldrue", "daemon_name": "mds.cephfs.compute-0.mldrue", "daemon_type": "mds", "events": ["2026-02-02T11:32:43.820886Z daemon:mds.cephfs.compute-0.mldrue [INFO] \"Deployed mds.cephfs.compute-0.mldrue on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2026-02-02T11:32:46.350722Z", "memory_usage": 15487467, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-02-02T11:32:43.642427Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae@mds.cephfs.compute-0.mldrue", "version": "20.2.0"}, {"container_id": "e108912e9f7d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "17.64%", "created": "2026-02-02T11:30:53.594401Z", "daemon_id": "compute-0.twcemg", "daemon_name": "mgr.compute-0.twcemg", "daemon_type": "mgr", "events": ["2026-02-02T11:31:35.079611Z daemon:mgr.compute-0.twcemg [INFO] \"Reconfigured mgr.compute-0.twcemg on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T11:32:46.349716Z", "memory_usage": 545783808, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-02-02T11:30:53.483711Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae@mgr.compute-0.twcemg", "version": "20.2.0"}, {"container_id": "fffb528e3212", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.94%", "created": "2026-02-02T11:30:49.508293Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-02-02T11:31:34.531826Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T11:32:46.349516Z", "memory_request": 2147483648, "memory_usage": 39940259, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-02-02T11:30:51.629032Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae@mon.compute-0", "version": "20.2.0"}, {"container_id": "409c17664cc0", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.59%", "created": "2026-02-02T11:31:53.413032Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-02-02T11:31:53.488231Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T11:32:46.350021Z", "memory_request": 4294967296, "memory_usage": 56874762, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-02T11:31:53.307140Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae@osd.0", "version": "20.2.0"}, {"container_id": "8937a933e506", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "1.88%", "created": "2026-02-02T11:31:57.219139Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-02-02T11:31:57.286562Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T11:32:46.350170Z", "memory_request": 4294967296, "memory_usage": 59045314, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-02T11:31:57.106688Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae@osd.1", "version": "20.2.0"}, {"container_id": "599aa4410c1f", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.77%", "created": "2026-02-02T11:32:01.068248Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-02-02T11:32:01.190882Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T11:32:46.350395Z", "memory_request": 4294967296, "memory_usage": 57577308, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-02T11:32:00.901232Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae@osd.2", "version": "20.2.0"}, {"container_id": "e99f14eeb73d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], 
"container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac68
Feb  2 06:32:50 np0005604943 systemd[1]: libpod-f396cda4a5f30eb1a6b7e48e8031fd0dc30692615029bfb66d9f22bec6470775.scope: Deactivated successfully.
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]: {
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:    "0": [
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:        {
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "devices": [
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "/dev/loop3"
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            ],
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_name": "ceph_lv0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_size": "21470642176",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "name": "ceph_lv0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "tags": {
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.crush_device_class": "",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.encrypted": "0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.osd_id": "0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.type": "block",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.vdo": "0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.with_tpm": "0"
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            },
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "type": "block",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "vg_name": "ceph_vg0"
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:        }
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:    ],
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:    "1": [
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:        {
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "devices": [
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "/dev/loop4"
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            ],
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_name": "ceph_lv1",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_size": "21470642176",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "name": "ceph_lv1",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "tags": {
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.crush_device_class": "",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.encrypted": "0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.osd_id": "1",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.type": "block",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.vdo": "0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.with_tpm": "0"
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            },
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "type": "block",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "vg_name": "ceph_vg1"
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:        }
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:    ],
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:    "2": [
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:        {
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "devices": [
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "/dev/loop5"
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            ],
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_name": "ceph_lv2",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_size": "21470642176",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "name": "ceph_lv2",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "tags": {
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.crush_device_class": "",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.encrypted": "0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.osd_id": "2",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.type": "block",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.vdo": "0",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:                "ceph.with_tpm": "0"
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            },
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "type": "block",
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:            "vg_name": "ceph_vg2"
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:        }
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]:    ]
Feb  2 06:32:50 np0005604943 trusting_mayer[96919]: }
Feb  2 06:32:50 np0005604943 podman[96949]: 2026-02-02 11:32:50.057927048 +0000 UTC m=+0.024084949 container died f396cda4a5f30eb1a6b7e48e8031fd0dc30692615029bfb66d9f22bec6470775 (image=quay.io/ceph/ceph:v20, name=sharp_curie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 06:32:50 np0005604943 systemd[1]: libpod-b994b8d13a9802c4f43bf846f1b6a3dd2dbeebecf7763851f278426a01cb0dc5.scope: Deactivated successfully.
Feb  2 06:32:50 np0005604943 podman[96899]: 2026-02-02 11:32:50.117949406 +0000 UTC m=+0.545113501 container died b994b8d13a9802c4f43bf846f1b6a3dd2dbeebecf7763851f278426a01cb0dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mayer, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:50 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a270fcaee190056df222a638b67bc75440a9730caffcd2967eac61e79c07dbd6-merged.mount: Deactivated successfully.
Feb  2 06:32:50 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1d38b546e74c620ae15c3c967edb9bc36228c604d5ad668d430d904322793f34-merged.mount: Deactivated successfully.
Feb  2 06:32:50 np0005604943 podman[96899]: 2026-02-02 11:32:50.306433756 +0000 UTC m=+0.733597851 container remove b994b8d13a9802c4f43bf846f1b6a3dd2dbeebecf7763851f278426a01cb0dc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mayer, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 06:32:50 np0005604943 rsyslogd[1009]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "61b0483497dc", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb  2 06:32:50 np0005604943 systemd[1]: libpod-conmon-b994b8d13a9802c4f43bf846f1b6a3dd2dbeebecf7763851f278426a01cb0dc5.scope: Deactivated successfully.
Feb  2 06:32:50 np0005604943 podman[96949]: 2026-02-02 11:32:50.374960311 +0000 UTC m=+0.341118192 container remove f396cda4a5f30eb1a6b7e48e8031fd0dc30692615029bfb66d9f22bec6470775 (image=quay.io/ceph/ceph:v20, name=sharp_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:32:50 np0005604943 systemd[1]: libpod-conmon-f396cda4a5f30eb1a6b7e48e8031fd0dc30692615029bfb66d9f22bec6470775.scope: Deactivated successfully.
Feb  2 06:32:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Feb  2 06:32:50 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb  2 06:32:50 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb  2 06:32:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Feb  2 06:32:50 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Feb  2 06:32:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Feb  2 06:32:50 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb  2 06:32:50 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:32:50 np0005604943 podman[97042]: 2026-02-02 11:32:50.766343648 +0000 UTC m=+0.069634417 container create aee90a8003802bf8d8923714f033f54f4fb8af666e13f529675525e0d0f949f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elion, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 06:32:50 np0005604943 podman[97042]: 2026-02-02 11:32:50.719824641 +0000 UTC m=+0.023115460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:50 np0005604943 systemd[1]: Started libpod-conmon-aee90a8003802bf8d8923714f033f54f4fb8af666e13f529675525e0d0f949f5.scope.
Feb  2 06:32:50 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:50 np0005604943 podman[97042]: 2026-02-02 11:32:50.917875879 +0000 UTC m=+0.221166658 container init aee90a8003802bf8d8923714f033f54f4fb8af666e13f529675525e0d0f949f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elion, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:50 np0005604943 podman[97042]: 2026-02-02 11:32:50.921959577 +0000 UTC m=+0.225250346 container start aee90a8003802bf8d8923714f033f54f4fb8af666e13f529675525e0d0f949f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:50 np0005604943 podman[97042]: 2026-02-02 11:32:50.929372551 +0000 UTC m=+0.232663340 container attach aee90a8003802bf8d8923714f033f54f4fb8af666e13f529675525e0d0f949f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elion, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:50 np0005604943 nice_elion[97058]: 167 167
Feb  2 06:32:50 np0005604943 systemd[1]: libpod-aee90a8003802bf8d8923714f033f54f4fb8af666e13f529675525e0d0f949f5.scope: Deactivated successfully.
Feb  2 06:32:50 np0005604943 podman[97042]: 2026-02-02 11:32:50.931152193 +0000 UTC m=+0.234443022 container died aee90a8003802bf8d8923714f033f54f4fb8af666e13f529675525e0d0f949f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elion, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Feb  2 06:32:51 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f020f18d058e3cac5f868a533294a9909b6263c0127d483287fbf607c173e603-merged.mount: Deactivated successfully.
Feb  2 06:32:51 np0005604943 podman[97042]: 2026-02-02 11:32:51.187354264 +0000 UTC m=+0.490645073 container remove aee90a8003802bf8d8923714f033f54f4fb8af666e13f529675525e0d0f949f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elion, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:51 np0005604943 systemd[1]: libpod-conmon-aee90a8003802bf8d8923714f033f54f4fb8af666e13f529675525e0d0f949f5.scope: Deactivated successfully.
Feb  2 06:32:51 np0005604943 python3[97100]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:51 np0005604943 podman[97108]: 2026-02-02 11:32:51.37777037 +0000 UTC m=+0.085284212 container create ce5955ef5b06bd2178329acb84b96f9df45a9f85b1988bd37444dc6a5668fec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hertz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 06:32:51 np0005604943 podman[97108]: 2026-02-02 11:32:51.313737824 +0000 UTC m=+0.021251716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:51 np0005604943 systemd[1]: Started libpod-conmon-ce5955ef5b06bd2178329acb84b96f9df45a9f85b1988bd37444dc6a5668fec2.scope.
Feb  2 06:32:51 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf02ee73f2ea059c3d26fa6ecc51243e76fc35317a180f1a4648e5a112392b88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf02ee73f2ea059c3d26fa6ecc51243e76fc35317a180f1a4648e5a112392b88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf02ee73f2ea059c3d26fa6ecc51243e76fc35317a180f1a4648e5a112392b88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf02ee73f2ea059c3d26fa6ecc51243e76fc35317a180f1a4648e5a112392b88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:51 np0005604943 podman[97121]: 2026-02-02 11:32:51.480483375 +0000 UTC m=+0.157392910 container create f1dca82ba0d79c200c7c2accbd56c3b4fc86a543eefbc86fd4ed421017afb7f5 (image=quay.io/ceph/ceph:v20, name=nostalgic_sinoussi, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:51 np0005604943 podman[97121]: 2026-02-02 11:32:51.426172542 +0000 UTC m=+0.103082107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:51 np0005604943 podman[97108]: 2026-02-02 11:32:51.534772907 +0000 UTC m=+0.242286769 container init ce5955ef5b06bd2178329acb84b96f9df45a9f85b1988bd37444dc6a5668fec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Feb  2 06:32:51 np0005604943 podman[97108]: 2026-02-02 11:32:51.540458592 +0000 UTC m=+0.247972484 container start ce5955ef5b06bd2178329acb84b96f9df45a9f85b1988bd37444dc6a5668fec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 06:32:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v80: 11 pgs: 2 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 4.9 KiB/s wr, 13 op/s
Feb  2 06:32:51 np0005604943 podman[97108]: 2026-02-02 11:32:51.598910385 +0000 UTC m=+0.306424277 container attach ce5955ef5b06bd2178329acb84b96f9df45a9f85b1988bd37444dc6a5668fec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hertz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 06:32:51 np0005604943 systemd[1]: Started libpod-conmon-f1dca82ba0d79c200c7c2accbd56c3b4fc86a543eefbc86fd4ed421017afb7f5.scope.
Feb  2 06:32:51 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11747aeb9acbda239204a68271003d95e4926521109703f790d65a6542b111e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11747aeb9acbda239204a68271003d95e4926521109703f790d65a6542b111e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Feb  2 06:32:51 np0005604943 podman[97121]: 2026-02-02 11:32:51.733379071 +0000 UTC m=+0.410288596 container init f1dca82ba0d79c200c7c2accbd56c3b4fc86a543eefbc86fd4ed421017afb7f5 (image=quay.io/ceph/ceph:v20, name=nostalgic_sinoussi, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 06:32:51 np0005604943 podman[97121]: 2026-02-02 11:32:51.737381187 +0000 UTC m=+0.414290692 container start f1dca82ba0d79c200c7c2accbd56c3b4fc86a543eefbc86fd4ed421017afb7f5 (image=quay.io/ceph/ceph:v20, name=nostalgic_sinoussi, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 06:32:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb  2 06:32:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Feb  2 06:32:51 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb  2 06:32:51 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb  2 06:32:51 np0005604943 podman[97121]: 2026-02-02 11:32:51.802254885 +0000 UTC m=+0.479164430 container attach f1dca82ba0d79c200c7c2accbd56c3b4fc86a543eefbc86fd4ed421017afb7f5 (image=quay.io/ceph/ceph:v20, name=nostalgic_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:32:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Feb  2 06:32:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:32:52 np0005604943 lvm[97239]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:32:52 np0005604943 lvm[97239]: VG ceph_vg0 finished
Feb  2 06:32:52 np0005604943 lvm[97241]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:32:52 np0005604943 lvm[97241]: VG ceph_vg1 finished
Feb  2 06:32:52 np0005604943 lvm[97243]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:32:52 np0005604943 lvm[97243]: VG ceph_vg2 finished
Feb  2 06:32:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  2 06:32:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1785378145' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb  2 06:32:52 np0005604943 nostalgic_sinoussi[97144]: 
Feb  2 06:32:52 np0005604943 nostalgic_sinoussi[97144]: {"fsid":"4548a36b-7cdc-5e3e-a814-4e1571be1fae","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":120,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":39,"num_osds":3,"num_up_osds":3,"osd_up_since":1770031926,"num_in_osds":3,"osd_in_since":1770031906,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":9},{"state_name":"unknown","count":2}],"num_pgs":11,"num_pools":11,"num_objects":30,"data_bytes":463390,"bytes_used":84049920,"bytes_avail":64327876608,"bytes_total":64411926528,"unknown_pgs_ratio":0.18181818723678589,"read_bytes_sec":1185,"write_bytes_sec":4979,"read_op_per_sec":0,"write_op_per_sec":12},"fsmap":{"epoch":5,"btime":"2026-02-02T11:32:45:823574+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.mldrue","status":"up:active","gid":14253}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-02T11:32:11.558080+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Feb  2 06:32:52 np0005604943 systemd[1]: libpod-f1dca82ba0d79c200c7c2accbd56c3b4fc86a543eefbc86fd4ed421017afb7f5.scope: Deactivated successfully.
Feb  2 06:32:52 np0005604943 podman[97121]: 2026-02-02 11:32:52.253836427 +0000 UTC m=+0.930745922 container died f1dca82ba0d79c200c7c2accbd56c3b4fc86a543eefbc86fd4ed421017afb7f5 (image=quay.io/ceph/ceph:v20, name=nostalgic_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:52 np0005604943 boring_hertz[97137]: {}
Feb  2 06:32:52 np0005604943 systemd[1]: var-lib-containers-storage-overlay-11747aeb9acbda239204a68271003d95e4926521109703f790d65a6542b111e9-merged.mount: Deactivated successfully.
Feb  2 06:32:52 np0005604943 systemd[1]: libpod-ce5955ef5b06bd2178329acb84b96f9df45a9f85b1988bd37444dc6a5668fec2.scope: Deactivated successfully.
Feb  2 06:32:52 np0005604943 systemd[1]: libpod-ce5955ef5b06bd2178329acb84b96f9df45a9f85b1988bd37444dc6a5668fec2.scope: Consumed 1.090s CPU time.
Feb  2 06:32:52 np0005604943 podman[97108]: 2026-02-02 11:32:52.475710904 +0000 UTC m=+1.183224796 container died ce5955ef5b06bd2178329acb84b96f9df45a9f85b1988bd37444dc6a5668fec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:52 np0005604943 podman[97121]: 2026-02-02 11:32:52.554282499 +0000 UTC m=+1.231192035 container remove f1dca82ba0d79c200c7c2accbd56c3b4fc86a543eefbc86fd4ed421017afb7f5 (image=quay.io/ceph/ceph:v20, name=nostalgic_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 06:32:52 np0005604943 systemd[1]: libpod-conmon-f1dca82ba0d79c200c7c2accbd56c3b4fc86a543eefbc86fd4ed421017afb7f5.scope: Deactivated successfully.
Feb  2 06:32:52 np0005604943 systemd[1]: var-lib-containers-storage-overlay-cf02ee73f2ea059c3d26fa6ecc51243e76fc35317a180f1a4648e5a112392b88-merged.mount: Deactivated successfully.
Feb  2 06:32:52 np0005604943 podman[97108]: 2026-02-02 11:32:52.707849548 +0000 UTC m=+1.415363430 container remove ce5955ef5b06bd2178329acb84b96f9df45a9f85b1988bd37444dc6a5668fec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:52 np0005604943 systemd[1]: libpod-conmon-ce5955ef5b06bd2178329acb84b96f9df45a9f85b1988bd37444dc6a5668fec2.scope: Deactivated successfully.
Feb  2 06:32:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:52 np0005604943 ceph-mon[75271]: from='client.? 192.168.122.100:0/686116269' entity='client.rgw.rgw.compute-0.ctqttb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb  2 06:32:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v82: 11 pgs: 11 active+clean; 455 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 198 B/s rd, 7.9 KiB/s wr, 29 op/s
Feb  2 06:32:53 np0005604943 python3[97393]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:53 np0005604943 radosgw[94979]: v1 topic migration: starting v1 topic migration..
Feb  2 06:32:53 np0005604943 radosgw[94979]: v1 topic migration: finished v1 topic migration
Feb  2 06:32:53 np0005604943 podman[97407]: 2026-02-02 11:32:53.713911991 +0000 UTC m=+0.122869790 container create 15304d333297632b3891dd718df0a7e25ecb6827f0ac8e40ca44bbcdc657299b (image=quay.io/ceph/ceph:v20, name=eloquent_elbakyan, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:53 np0005604943 podman[97407]: 2026-02-02 11:32:53.620009421 +0000 UTC m=+0.028967260 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:53 np0005604943 radosgw[94979]: framework: beast
Feb  2 06:32:53 np0005604943 radosgw[94979]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Feb  2 06:32:53 np0005604943 radosgw[94979]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Feb  2 06:32:53 np0005604943 radosgw[94979]: starting handler: beast
Feb  2 06:32:53 np0005604943 systemd[1]: Started libpod-conmon-15304d333297632b3891dd718df0a7e25ecb6827f0ac8e40ca44bbcdc657299b.scope.
Feb  2 06:32:53 np0005604943 radosgw[94979]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 06:32:53 np0005604943 radosgw[94979]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.ctqttb,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=5146e00b-0a71-4eb0-9811-9c57cf47df3e,zone_name=default,zonegroup_id=cea5580c-4145-447d-b53f-2a852b395f37,zonegroup_name=default}
Feb  2 06:32:53 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd518d64fe3a6ee5daa4154472e58b7f9bdfa02d014e7a76faa19ee047b913e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd518d64fe3a6ee5daa4154472e58b7f9bdfa02d014e7a76faa19ee047b913e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:53 np0005604943 podman[97407]: 2026-02-02 11:32:53.900133455 +0000 UTC m=+0.309091294 container init 15304d333297632b3891dd718df0a7e25ecb6827f0ac8e40ca44bbcdc657299b (image=quay.io/ceph/ceph:v20, name=eloquent_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Feb  2 06:32:53 np0005604943 podman[97407]: 2026-02-02 11:32:53.90894536 +0000 UTC m=+0.317903169 container start 15304d333297632b3891dd718df0a7e25ecb6827f0ac8e40ca44bbcdc657299b (image=quay.io/ceph/ceph:v20, name=eloquent_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:53 np0005604943 podman[97407]: 2026-02-02 11:32:53.992959164 +0000 UTC m=+0.401917063 container attach 15304d333297632b3891dd718df0a7e25ecb6827f0ac8e40ca44bbcdc657299b (image=quay.io/ceph/ceph:v20, name=eloquent_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:54 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:54 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:54 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:54 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:54 np0005604943 podman[97472]: 2026-02-02 11:32:54.147047297 +0000 UTC m=+0.284880372 container exec fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 06:32:54 np0005604943 podman[97472]: 2026-02-02 11:32:54.244445419 +0000 UTC m=+0.382278444 container exec_died fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:32:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 06:32:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3697280252' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb  2 06:32:54 np0005604943 eloquent_elbakyan[97458]: 
Feb  2 06:32:54 np0005604943 eloquent_elbakyan[97458]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.ctqttb","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Feb  2 06:32:54 np0005604943 systemd[1]: libpod-15304d333297632b3891dd718df0a7e25ecb6827f0ac8e40ca44bbcdc657299b.scope: Deactivated successfully.
Feb  2 06:32:54 np0005604943 podman[97407]: 2026-02-02 11:32:54.414540686 +0000 UTC m=+0.823498485 container died 15304d333297632b3891dd718df0a7e25ecb6827f0ac8e40ca44bbcdc657299b (image=quay.io/ceph/ceph:v20, name=eloquent_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:54 np0005604943 systemd[1]: var-lib-containers-storage-overlay-bd518d64fe3a6ee5daa4154472e58b7f9bdfa02d014e7a76faa19ee047b913e4-merged.mount: Deactivated successfully.
Feb  2 06:32:54 np0005604943 podman[97407]: 2026-02-02 11:32:54.737077089 +0000 UTC m=+1.146034918 container remove 15304d333297632b3891dd718df0a7e25ecb6827f0ac8e40ca44bbcdc657299b (image=quay.io/ceph/ceph:v20, name=eloquent_elbakyan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 06:32:54 np0005604943 systemd[1]: libpod-conmon-15304d333297632b3891dd718df0a7e25ecb6827f0ac8e40ca44bbcdc657299b.scope: Deactivated successfully.
Feb  2 06:32:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v83: 11 pgs: 11 active+clean; 455 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 4.5 KiB/s wr, 17 op/s
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:32:55 np0005604943 python3[97714]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:32:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:32:55 np0005604943 podman[97715]: 2026-02-02 11:32:55.815773476 +0000 UTC m=+0.071283836 container create 61c41d77b0d38ddccbbd80b1f4e9e2585c4f48b833a37da34456b26ea7838cb0 (image=quay.io/ceph/ceph:v20, name=inspiring_kare, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 06:32:55 np0005604943 podman[97715]: 2026-02-02 11:32:55.770785453 +0000 UTC m=+0.026295893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:55 np0005604943 systemd[1]: Started libpod-conmon-61c41d77b0d38ddccbbd80b1f4e9e2585c4f48b833a37da34456b26ea7838cb0.scope.
Feb  2 06:32:55 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e82bfdbf254ef020fceca2d4087b6ea16f6bc463165b63e68c3647a4021d48b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e82bfdbf254ef020fceca2d4087b6ea16f6bc463165b63e68c3647a4021d48b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:56 np0005604943 podman[97715]: 2026-02-02 11:32:56.01892483 +0000 UTC m=+0.274435230 container init 61c41d77b0d38ddccbbd80b1f4e9e2585c4f48b833a37da34456b26ea7838cb0 (image=quay.io/ceph/ceph:v20, name=inspiring_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:56 np0005604943 podman[97715]: 2026-02-02 11:32:56.027747556 +0000 UTC m=+0.283257936 container start 61c41d77b0d38ddccbbd80b1f4e9e2585c4f48b833a37da34456b26ea7838cb0 (image=quay.io/ceph/ceph:v20, name=inspiring_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 06:32:56 np0005604943 podman[97715]: 2026-02-02 11:32:56.110839533 +0000 UTC m=+0.366349893 container attach 61c41d77b0d38ddccbbd80b1f4e9e2585c4f48b833a37da34456b26ea7838cb0 (image=quay.io/ceph/ceph:v20, name=inspiring_kare, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:56 np0005604943 podman[97796]: 2026-02-02 11:32:56.240291963 +0000 UTC m=+0.104177839 container create 19f015a4fe41bd7585ed9b4b668352970b9aa216d857a63103094175105058e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:56 np0005604943 podman[97796]: 2026-02-02 11:32:56.158586626 +0000 UTC m=+0.022472542 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:56 np0005604943 systemd[1]: Started libpod-conmon-19f015a4fe41bd7585ed9b4b668352970b9aa216d857a63103094175105058e1.scope.
Feb  2 06:32:56 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:56 np0005604943 podman[97796]: 2026-02-02 11:32:56.421648086 +0000 UTC m=+0.285534052 container init 19f015a4fe41bd7585ed9b4b668352970b9aa216d857a63103094175105058e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 06:32:56 np0005604943 podman[97796]: 2026-02-02 11:32:56.43213236 +0000 UTC m=+0.296018216 container start 19f015a4fe41bd7585ed9b4b668352970b9aa216d857a63103094175105058e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 06:32:56 np0005604943 angry_saha[97831]: 167 167
Feb  2 06:32:56 np0005604943 systemd[1]: libpod-19f015a4fe41bd7585ed9b4b668352970b9aa216d857a63103094175105058e1.scope: Deactivated successfully.
Feb  2 06:32:56 np0005604943 podman[97796]: 2026-02-02 11:32:56.461731417 +0000 UTC m=+0.325617303 container attach 19f015a4fe41bd7585ed9b4b668352970b9aa216d857a63103094175105058e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:32:56 np0005604943 podman[97796]: 2026-02-02 11:32:56.462320394 +0000 UTC m=+0.326206260 container died 19f015a4fe41bd7585ed9b4b668352970b9aa216d857a63103094175105058e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb  2 06:32:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Feb  2 06:32:56 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4198212603' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Feb  2 06:32:56 np0005604943 inspiring_kare[97780]: mimic
Feb  2 06:32:56 np0005604943 systemd[1]: libpod-61c41d77b0d38ddccbbd80b1f4e9e2585c4f48b833a37da34456b26ea7838cb0.scope: Deactivated successfully.
Feb  2 06:32:56 np0005604943 conmon[97780]: conmon 61c41d77b0d38ddccbbd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61c41d77b0d38ddccbbd80b1f4e9e2585c4f48b833a37da34456b26ea7838cb0.scope/container/memory.events
Feb  2 06:32:56 np0005604943 podman[97715]: 2026-02-02 11:32:56.521962951 +0000 UTC m=+0.777473351 container died 61c41d77b0d38ddccbbd80b1f4e9e2585c4f48b833a37da34456b26ea7838cb0 (image=quay.io/ceph/ceph:v20, name=inspiring_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb  2 06:32:56 np0005604943 systemd[1]: var-lib-containers-storage-overlay-5e82bfdbf254ef020fceca2d4087b6ea16f6bc463165b63e68c3647a4021d48b-merged.mount: Deactivated successfully.
Feb  2 06:32:56 np0005604943 podman[97715]: 2026-02-02 11:32:56.621780413 +0000 UTC m=+0.877290813 container remove 61c41d77b0d38ddccbbd80b1f4e9e2585c4f48b833a37da34456b26ea7838cb0 (image=quay.io/ceph/ceph:v20, name=inspiring_kare, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:56 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:56 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:56 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:32:56 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:32:56 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:32:56 np0005604943 systemd[1]: libpod-conmon-61c41d77b0d38ddccbbd80b1f4e9e2585c4f48b833a37da34456b26ea7838cb0.scope: Deactivated successfully.
Feb  2 06:32:56 np0005604943 systemd[1]: var-lib-containers-storage-overlay-abe672f64839957265e734b195714943db31e8b564f8141cb1104809b569fef9-merged.mount: Deactivated successfully.
Feb  2 06:32:56 np0005604943 podman[97796]: 2026-02-02 11:32:56.732786029 +0000 UTC m=+0.596671935 container remove 19f015a4fe41bd7585ed9b4b668352970b9aa216d857a63103094175105058e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_saha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 06:32:56 np0005604943 systemd[1]: libpod-conmon-19f015a4fe41bd7585ed9b4b668352970b9aa216d857a63103094175105058e1.scope: Deactivated successfully.
Feb  2 06:32:56 np0005604943 podman[97872]: 2026-02-02 11:32:56.913652688 +0000 UTC m=+0.061639547 container create a45cb06f982d7f48648ca86641898d40134e73cd04f1aae9742df7d9a452c0a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mestorf, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:32:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:32:56 np0005604943 systemd[1]: Started libpod-conmon-a45cb06f982d7f48648ca86641898d40134e73cd04f1aae9742df7d9a452c0a5.scope.
Feb  2 06:32:56 np0005604943 podman[97872]: 2026-02-02 11:32:56.87849908 +0000 UTC m=+0.026485989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:56 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db35fdb928f2c77f6f22ec42a2585a762c56ad3875234c9141548684f2d2e287/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db35fdb928f2c77f6f22ec42a2585a762c56ad3875234c9141548684f2d2e287/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db35fdb928f2c77f6f22ec42a2585a762c56ad3875234c9141548684f2d2e287/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db35fdb928f2c77f6f22ec42a2585a762c56ad3875234c9141548684f2d2e287/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db35fdb928f2c77f6f22ec42a2585a762c56ad3875234c9141548684f2d2e287/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:57 np0005604943 podman[97872]: 2026-02-02 11:32:57.056279229 +0000 UTC m=+0.204266048 container init a45cb06f982d7f48648ca86641898d40134e73cd04f1aae9742df7d9a452c0a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mestorf, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 06:32:57 np0005604943 podman[97872]: 2026-02-02 11:32:57.061795699 +0000 UTC m=+0.209782508 container start a45cb06f982d7f48648ca86641898d40134e73cd04f1aae9742df7d9a452c0a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mestorf, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 06:32:57 np0005604943 podman[97872]: 2026-02-02 11:32:57.066260038 +0000 UTC m=+0.214246897 container attach a45cb06f982d7f48648ca86641898d40134e73cd04f1aae9742df7d9a452c0a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mestorf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:57 np0005604943 interesting_mestorf[97888]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:32:57 np0005604943 interesting_mestorf[97888]: --> All data devices are unavailable
Feb  2 06:32:57 np0005604943 systemd[1]: libpod-a45cb06f982d7f48648ca86641898d40134e73cd04f1aae9742df7d9a452c0a5.scope: Deactivated successfully.
Feb  2 06:32:57 np0005604943 podman[97872]: 2026-02-02 11:32:57.563403839 +0000 UTC m=+0.711390698 container died a45cb06f982d7f48648ca86641898d40134e73cd04f1aae9742df7d9a452c0a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v84: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.2 KiB/s wr, 158 op/s
Feb  2 06:32:57 np0005604943 python3[97930]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:32:57 np0005604943 systemd[1]: var-lib-containers-storage-overlay-db35fdb928f2c77f6f22ec42a2585a762c56ad3875234c9141548684f2d2e287-merged.mount: Deactivated successfully.
Feb  2 06:32:57 np0005604943 podman[97872]: 2026-02-02 11:32:57.780407815 +0000 UTC m=+0.928394654 container remove a45cb06f982d7f48648ca86641898d40134e73cd04f1aae9742df7d9a452c0a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mestorf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 06:32:57 np0005604943 systemd[1]: libpod-conmon-a45cb06f982d7f48648ca86641898d40134e73cd04f1aae9742df7d9a452c0a5.scope: Deactivated successfully.
Feb  2 06:32:57 np0005604943 podman[97946]: 2026-02-02 11:32:57.826339416 +0000 UTC m=+0.195258358 container create 888e5ff7086986a59f3309d2192a0a0925dd2da20b0727ea9995b51e459f1e60 (image=quay.io/ceph/ceph:v20, name=happy_hellman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:57 np0005604943 systemd[1]: Started libpod-conmon-888e5ff7086986a59f3309d2192a0a0925dd2da20b0727ea9995b51e459f1e60.scope.
Feb  2 06:32:57 np0005604943 podman[97946]: 2026-02-02 11:32:57.798799268 +0000 UTC m=+0.167718260 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:32:57 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25705f399fa27a80891a3486811b21e57d1ce9d1a45605c9f43dd3517062ad63/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25705f399fa27a80891a3486811b21e57d1ce9d1a45605c9f43dd3517062ad63/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:57 np0005604943 podman[97946]: 2026-02-02 11:32:57.930862723 +0000 UTC m=+0.299781675 container init 888e5ff7086986a59f3309d2192a0a0925dd2da20b0727ea9995b51e459f1e60 (image=quay.io/ceph/ceph:v20, name=happy_hellman, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 06:32:57 np0005604943 podman[97946]: 2026-02-02 11:32:57.941591884 +0000 UTC m=+0.310510826 container start 888e5ff7086986a59f3309d2192a0a0925dd2da20b0727ea9995b51e459f1e60 (image=quay.io/ceph/ceph:v20, name=happy_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:57 np0005604943 podman[97946]: 2026-02-02 11:32:57.988524814 +0000 UTC m=+0.357443736 container attach 888e5ff7086986a59f3309d2192a0a0925dd2da20b0727ea9995b51e459f1e60 (image=quay.io/ceph/ceph:v20, name=happy_hellman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:32:58 np0005604943 podman[98048]: 2026-02-02 11:32:58.328286675 +0000 UTC m=+0.101723357 container create a7234321994602e02f122ea781c873b114c50d51fc3682f8ce8b8ebd2a51e42e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:58 np0005604943 podman[98048]: 2026-02-02 11:32:58.258996039 +0000 UTC m=+0.032432791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Feb  2 06:32:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/653592639' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Feb  2 06:32:58 np0005604943 happy_hellman[97980]: 
Feb  2 06:32:58 np0005604943 happy_hellman[97980]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Feb  2 06:32:58 np0005604943 systemd[1]: libpod-888e5ff7086986a59f3309d2192a0a0925dd2da20b0727ea9995b51e459f1e60.scope: Deactivated successfully.
Feb  2 06:32:58 np0005604943 podman[97946]: 2026-02-02 11:32:58.485961843 +0000 UTC m=+0.854880775 container died 888e5ff7086986a59f3309d2192a0a0925dd2da20b0727ea9995b51e459f1e60 (image=quay.io/ceph/ceph:v20, name=happy_hellman, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:58 np0005604943 systemd[1]: Started libpod-conmon-a7234321994602e02f122ea781c873b114c50d51fc3682f8ce8b8ebd2a51e42e.scope.
Feb  2 06:32:58 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:58 np0005604943 podman[98048]: 2026-02-02 11:32:58.720585779 +0000 UTC m=+0.494022471 container init a7234321994602e02f122ea781c873b114c50d51fc3682f8ce8b8ebd2a51e42e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bell, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:58 np0005604943 podman[98048]: 2026-02-02 11:32:58.725011638 +0000 UTC m=+0.498448320 container start a7234321994602e02f122ea781c873b114c50d51fc3682f8ce8b8ebd2a51e42e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 06:32:58 np0005604943 practical_bell[98078]: 167 167
Feb  2 06:32:58 np0005604943 systemd[1]: libpod-a7234321994602e02f122ea781c873b114c50d51fc3682f8ce8b8ebd2a51e42e.scope: Deactivated successfully.
Feb  2 06:32:58 np0005604943 podman[98048]: 2026-02-02 11:32:58.762332938 +0000 UTC m=+0.535769660 container attach a7234321994602e02f122ea781c873b114c50d51fc3682f8ce8b8ebd2a51e42e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:32:58 np0005604943 podman[98048]: 2026-02-02 11:32:58.762901205 +0000 UTC m=+0.536337927 container died a7234321994602e02f122ea781c873b114c50d51fc3682f8ce8b8ebd2a51e42e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bell, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 06:32:58 np0005604943 systemd[1]: var-lib-containers-storage-overlay-25705f399fa27a80891a3486811b21e57d1ce9d1a45605c9f43dd3517062ad63-merged.mount: Deactivated successfully.
Feb  2 06:32:59 np0005604943 podman[97946]: 2026-02-02 11:32:59.018319404 +0000 UTC m=+1.387238336 container remove 888e5ff7086986a59f3309d2192a0a0925dd2da20b0727ea9995b51e459f1e60 (image=quay.io/ceph/ceph:v20, name=happy_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 06:32:59 np0005604943 systemd[1]: var-lib-containers-storage-overlay-53d5ac86168315e958b809d376cc8974fd850ce1cb8c40ff6b866c488fd134ac-merged.mount: Deactivated successfully.
Feb  2 06:32:59 np0005604943 systemd[1]: libpod-conmon-888e5ff7086986a59f3309d2192a0a0925dd2da20b0727ea9995b51e459f1e60.scope: Deactivated successfully.
Feb  2 06:32:59 np0005604943 podman[98048]: 2026-02-02 11:32:59.097299882 +0000 UTC m=+0.870736594 container remove a7234321994602e02f122ea781c873b114c50d51fc3682f8ce8b8ebd2a51e42e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 06:32:59 np0005604943 systemd[1]: libpod-conmon-a7234321994602e02f122ea781c873b114c50d51fc3682f8ce8b8ebd2a51e42e.scope: Deactivated successfully.
Feb  2 06:32:59 np0005604943 podman[98103]: 2026-02-02 11:32:59.201098839 +0000 UTC m=+0.023005618 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:32:59 np0005604943 podman[98103]: 2026-02-02 11:32:59.30298788 +0000 UTC m=+0.124894659 container create 3b34933c26dd5badc09d513b1a0b8575332469d7f71b84c847a370a87051df2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_zhukovsky, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:32:59 np0005604943 systemd[1]: Started libpod-conmon-3b34933c26dd5badc09d513b1a0b8575332469d7f71b84c847a370a87051df2c.scope.
Feb  2 06:32:59 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:32:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8a8d56ed6e685acb2e255ab987d4226e40d3b50299f09c1c79998afdbf8038/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8a8d56ed6e685acb2e255ab987d4226e40d3b50299f09c1c79998afdbf8038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8a8d56ed6e685acb2e255ab987d4226e40d3b50299f09c1c79998afdbf8038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8a8d56ed6e685acb2e255ab987d4226e40d3b50299f09c1c79998afdbf8038/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:32:59 np0005604943 podman[98103]: 2026-02-02 11:32:59.524833956 +0000 UTC m=+0.346740755 container init 3b34933c26dd5badc09d513b1a0b8575332469d7f71b84c847a370a87051df2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_zhukovsky, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:32:59 np0005604943 podman[98103]: 2026-02-02 11:32:59.530737537 +0000 UTC m=+0.352644316 container start 3b34933c26dd5badc09d513b1a0b8575332469d7f71b84c847a370a87051df2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:32:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v85: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 7.4 KiB/s wr, 141 op/s
Feb  2 06:32:59 np0005604943 podman[98103]: 2026-02-02 11:32:59.581866079 +0000 UTC m=+0.403772898 container attach 3b34933c26dd5badc09d513b1a0b8575332469d7f71b84c847a370a87051df2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]: {
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:    "0": [
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:        {
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "devices": [
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "/dev/loop3"
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            ],
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_name": "ceph_lv0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_size": "21470642176",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "name": "ceph_lv0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "tags": {
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.crush_device_class": "",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.encrypted": "0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.osd_id": "0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.type": "block",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.vdo": "0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.with_tpm": "0"
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            },
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "type": "block",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "vg_name": "ceph_vg0"
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:        }
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:    ],
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:    "1": [
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:        {
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "devices": [
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "/dev/loop4"
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            ],
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_name": "ceph_lv1",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_size": "21470642176",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "name": "ceph_lv1",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "tags": {
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.crush_device_class": "",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.encrypted": "0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.osd_id": "1",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.type": "block",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.vdo": "0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.with_tpm": "0"
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            },
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "type": "block",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "vg_name": "ceph_vg1"
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:        }
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:    ],
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:    "2": [
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:        {
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "devices": [
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "/dev/loop5"
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            ],
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_name": "ceph_lv2",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_size": "21470642176",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "name": "ceph_lv2",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "tags": {
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.cluster_name": "ceph",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.crush_device_class": "",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.encrypted": "0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.objectstore": "bluestore",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.osd_id": "2",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.type": "block",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.vdo": "0",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:                "ceph.with_tpm": "0"
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            },
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "type": "block",
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:            "vg_name": "ceph_vg2"
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:        }
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]:    ]
Feb  2 06:32:59 np0005604943 sleepy_zhukovsky[98119]: }
Feb  2 06:32:59 np0005604943 systemd[1]: libpod-3b34933c26dd5badc09d513b1a0b8575332469d7f71b84c847a370a87051df2c.scope: Deactivated successfully.
Feb  2 06:32:59 np0005604943 podman[98103]: 2026-02-02 11:32:59.827261107 +0000 UTC m=+0.649167906 container died 3b34933c26dd5badc09d513b1a0b8575332469d7f71b84c847a370a87051df2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 06:32:59 np0005604943 systemd[1]: var-lib-containers-storage-overlay-da8a8d56ed6e685acb2e255ab987d4226e40d3b50299f09c1c79998afdbf8038-merged.mount: Deactivated successfully.
Feb  2 06:32:59 np0005604943 podman[98103]: 2026-02-02 11:32:59.87087378 +0000 UTC m=+0.692780559 container remove 3b34933c26dd5badc09d513b1a0b8575332469d7f71b84c847a370a87051df2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:32:59 np0005604943 systemd[1]: libpod-conmon-3b34933c26dd5badc09d513b1a0b8575332469d7f71b84c847a370a87051df2c.scope: Deactivated successfully.
Feb  2 06:33:00 np0005604943 podman[98204]: 2026-02-02 11:33:00.278374174 +0000 UTC m=+0.035460948 container create afd600559236233a49887f1ec5640f42318436418fbad2572c0d5d958ffad11a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:33:00 np0005604943 systemd[1]: Started libpod-conmon-afd600559236233a49887f1ec5640f42318436418fbad2572c0d5d958ffad11a.scope.
Feb  2 06:33:00 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:33:00 np0005604943 podman[98204]: 2026-02-02 11:33:00.352974144 +0000 UTC m=+0.110060938 container init afd600559236233a49887f1ec5640f42318436418fbad2572c0d5d958ffad11a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:33:00 np0005604943 podman[98204]: 2026-02-02 11:33:00.261806094 +0000 UTC m=+0.018892888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:33:00 np0005604943 podman[98204]: 2026-02-02 11:33:00.359035021 +0000 UTC m=+0.116121805 container start afd600559236233a49887f1ec5640f42318436418fbad2572c0d5d958ffad11a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 06:33:00 np0005604943 podman[98204]: 2026-02-02 11:33:00.362005207 +0000 UTC m=+0.119092031 container attach afd600559236233a49887f1ec5640f42318436418fbad2572c0d5d958ffad11a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 06:33:00 np0005604943 zen_saha[98221]: 167 167
Feb  2 06:33:00 np0005604943 systemd[1]: libpod-afd600559236233a49887f1ec5640f42318436418fbad2572c0d5d958ffad11a.scope: Deactivated successfully.
Feb  2 06:33:00 np0005604943 podman[98226]: 2026-02-02 11:33:00.393580971 +0000 UTC m=+0.019173176 container died afd600559236233a49887f1ec5640f42318436418fbad2572c0d5d958ffad11a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:33:00 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ce6fb81ae1574ada1cfe50506f98c4e017b36a7f39b8277e6a55920763aab7f7-merged.mount: Deactivated successfully.
Feb  2 06:33:00 np0005604943 podman[98226]: 2026-02-02 11:33:00.424314341 +0000 UTC m=+0.049906536 container remove afd600559236233a49887f1ec5640f42318436418fbad2572c0d5d958ffad11a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_saha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:33:00 np0005604943 systemd[1]: libpod-conmon-afd600559236233a49887f1ec5640f42318436418fbad2572c0d5d958ffad11a.scope: Deactivated successfully.
Feb  2 06:33:00 np0005604943 podman[98248]: 2026-02-02 11:33:00.533805353 +0000 UTC m=+0.038503366 container create 0d8f51ab7c23f3233519b8254dbed3cb56ff53241ea9ea01aebc7d8caca0ca14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:33:00 np0005604943 systemd[1]: Started libpod-conmon-0d8f51ab7c23f3233519b8254dbed3cb56ff53241ea9ea01aebc7d8caca0ca14.scope.
Feb  2 06:33:00 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:33:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a060af89de9877deb0455d2d9ae80138bcdcd9d92056666687f4ca6a5b4ea6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:33:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a060af89de9877deb0455d2d9ae80138bcdcd9d92056666687f4ca6a5b4ea6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:33:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a060af89de9877deb0455d2d9ae80138bcdcd9d92056666687f4ca6a5b4ea6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:33:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a060af89de9877deb0455d2d9ae80138bcdcd9d92056666687f4ca6a5b4ea6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:33:00 np0005604943 podman[98248]: 2026-02-02 11:33:00.59134623 +0000 UTC m=+0.096044353 container init 0d8f51ab7c23f3233519b8254dbed3cb56ff53241ea9ea01aebc7d8caca0ca14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 06:33:00 np0005604943 podman[98248]: 2026-02-02 11:33:00.602852673 +0000 UTC m=+0.107550696 container start 0d8f51ab7c23f3233519b8254dbed3cb56ff53241ea9ea01aebc7d8caca0ca14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 06:33:00 np0005604943 podman[98248]: 2026-02-02 11:33:00.607696433 +0000 UTC m=+0.112394486 container attach 0d8f51ab7c23f3233519b8254dbed3cb56ff53241ea9ea01aebc7d8caca0ca14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 06:33:00 np0005604943 podman[98248]: 2026-02-02 11:33:00.518281053 +0000 UTC m=+0.022979086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:33:01 np0005604943 lvm[98342]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:33:01 np0005604943 lvm[98342]: VG ceph_vg0 finished
Feb  2 06:33:01 np0005604943 lvm[98343]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:33:01 np0005604943 lvm[98343]: VG ceph_vg1 finished
Feb  2 06:33:01 np0005604943 lvm[98345]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:33:01 np0005604943 lvm[98345]: VG ceph_vg2 finished
Feb  2 06:33:01 np0005604943 heuristic_sanderson[98264]: {}
Feb  2 06:33:01 np0005604943 systemd[1]: libpod-0d8f51ab7c23f3233519b8254dbed3cb56ff53241ea9ea01aebc7d8caca0ca14.scope: Deactivated successfully.
Feb  2 06:33:01 np0005604943 podman[98248]: 2026-02-02 11:33:01.350236203 +0000 UTC m=+0.854934256 container died 0d8f51ab7c23f3233519b8254dbed3cb56ff53241ea9ea01aebc7d8caca0ca14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sanderson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 06:33:01 np0005604943 systemd[1]: libpod-0d8f51ab7c23f3233519b8254dbed3cb56ff53241ea9ea01aebc7d8caca0ca14.scope: Consumed 1.080s CPU time.
Feb  2 06:33:01 np0005604943 systemd[1]: var-lib-containers-storage-overlay-28a060af89de9877deb0455d2d9ae80138bcdcd9d92056666687f4ca6a5b4ea6-merged.mount: Deactivated successfully.
Feb  2 06:33:01 np0005604943 podman[98248]: 2026-02-02 11:33:01.388152911 +0000 UTC m=+0.892850964 container remove 0d8f51ab7c23f3233519b8254dbed3cb56ff53241ea9ea01aebc7d8caca0ca14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:33:01 np0005604943 systemd[1]: libpod-conmon-0d8f51ab7c23f3233519b8254dbed3cb56ff53241ea9ea01aebc7d8caca0ca14.scope: Deactivated successfully.
Feb  2 06:33:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:33:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:33:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:33:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:33:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v86: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 6.6 KiB/s wr, 126 op/s
Feb  2 06:33:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:33:02 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:33:02 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:33:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v87: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 5.6 KiB/s wr, 121 op/s
Feb  2 06:33:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.2 KiB/s wr, 109 op/s
Feb  2 06:33:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:33:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.2 KiB/s wr, 109 op/s
Feb  2 06:33:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:33:09
Feb  2 06:33:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:33:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:33:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'vms', '.rgw.root', 'default.rgw.log']
Feb  2 06:33:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:33:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v90: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 0 B/s wr, 13 op/s
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.756823207435179e-07 of space, bias 4.0, pg target 0.0008108187848922215 quantized to 16 (current 1)
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 06:33:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Feb  2 06:33:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:33:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:33:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 0 B/s wr, 13 op/s
Feb  2 06:33:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Feb  2 06:33:11 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Feb  2 06:33:11 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Feb  2 06:33:11 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev 9bb4e9be-df3a-4062-910e-3593c828b3e4 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb  2 06:33:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Feb  2 06:33:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:33:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Feb  2 06:33:12 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:12 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Feb  2 06:33:12 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Feb  2 06:33:12 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev 47143cce-ae39-48fd-9989-97746b75bc5a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb  2 06:33:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Feb  2 06:33:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v94: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Feb  2 06:33:13 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev bd58a10d-399c-48c2-9912-26d8e910b772 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Feb  2 06:33:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Feb  2 06:33:14 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:14 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:14 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:14 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Feb  2 06:33:15 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Feb  2 06:33:15 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev e3af81a1-1d28-4c44-9b77-5d5608c9996a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb  2 06:33:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Feb  2 06:33:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=42 pruub=9.609827042s) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active pruub 83.117309570s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=42 pruub=9.609827042s) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown pruub 83.117309570s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.12( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.c( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.e( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.10( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.14( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.1e( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.1a( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.1( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=17/18 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v97: 73 pgs: 62 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:33:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 06:33:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 06:33:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:15 np0005604943 ceph-mgr[75558]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Feb  2 06:33:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Feb  2 06:33:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb  2 06:33:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Feb  2 06:33:16 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=44 pruub=10.678660393s) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active pruub 92.823387146s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:16 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Feb  2 06:33:16 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=44 pruub=10.678660393s) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown pruub 92.823387146s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev 1c61a694-c0ea-451f-b212-3439788c608f (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb  2 06:33:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Feb  2 06:33:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44 pruub=11.626029015s) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active pruub 86.153533936s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44 pruub=11.626029015s) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown pruub 86.153533936s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:16 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb  2 06:33:16 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:16 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=42/44 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42 pruub=9.433838844s) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active pruub 87.982063293s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42 pruub=9.433838844s) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown pruub 87.982063293s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.2( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.4( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.b( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.d( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.10( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.13( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.14( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.1a( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.19( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.1c( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=18/19 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Feb  2 06:33:17 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev 45f227f1-b2c3-44b3-b7de-305c7e93d0bf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1f( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1e( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.b( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.17( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.8( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.a( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.c( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.b( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.15( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.16( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.17( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.6( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.e( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.d( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=20/21 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.1c( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.b( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.19( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=42/45 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.4( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.2( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.10( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.14( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.d( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 45 pg[3.13( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=18/18 les/c/f=19/19/0 sis=42) [1] r=0 lpr=42 pi=[18,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.b( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=44/45 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.c( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.17( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.16( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.15( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [0] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.17( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.8( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.a( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.b( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=44/45 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.6( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.e( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.d( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=20/20 les/c/f=21/21/0 sis=44) [2] r=0 lpr=44 pi=[20,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v100: 135 pgs: 32 activating, 31 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Feb  2 06:33:17 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Feb  2 06:33:17 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Feb  2 06:33:18 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev 8e2639f1-f4a5-44f8-b9fe-7833f45c55f0 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:18 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=46 pruub=11.616682053s) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active pruub 92.025070190s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:18 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=46 pruub=11.616682053s) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown pruub 92.025070190s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb  2 06:33:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Feb  2 06:33:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Feb  2 06:33:19 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Feb  2 06:33:19 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev 834ec13e-7be6-4c75-8700-45ae2cdee3fb (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.13( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.12( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.11( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.15( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.16( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.14( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Feb  2 06:33:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.10( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.b( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.a( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.9( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.8( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.d( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.6( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.4( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.f( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.e( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.c( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.5( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.7( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.2( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1c( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.3( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1d( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1e( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1f( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.18( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.19( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1a( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1b( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.17( empty local-lis/les=22/23 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.10( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.12( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.14( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.16( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=46/47 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.d( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.7( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1d( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.19( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 47 pg[7.17( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=22/22 les/c/f=23/23/0 sis=46) [1] r=0 lpr=46 pi=[22,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:19 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:19 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v103: 181 pgs: 32 activating, 46 unknown, 103 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:33:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 06:33:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 06:33:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:19 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Feb  2 06:33:19 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Feb  2 06:33:20 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev 54ef8c4b-f1d4-4634-9cd8-20b5d7866752 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 48 pg[8.0( v 32'6 (0'0,32'6] local-lis/les=31/32 n=6 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=48 pruub=11.929622650s) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 32'5 mlcod 32'5 active pruub 94.353576660s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 48 pg[9.0( v 39'483 (0'0,39'483] local-lis/les=33/34 n=210 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=14.076668739s) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 39'482 mlcod 39'482 active pruub 96.500778198s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 48 pg[8.0( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=48 pruub=11.929622650s) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 32'5 mlcod 0'0 unknown pruub 94.353576660s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 48 pg[9.0( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=14.076668739s) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 39'482 mlcod 0'0 unknown pruub 96.500778198s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f107f7e00 space 0x558f10c07740 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1072c300 space 0x558f10c63140 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1072dd80 space 0x558f0ffd5440 0x0~98 clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f107f6f80 space 0x558f10067d40 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f0fd59d80 space 0x558f10c07d40 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10810200 space 0x558f10c10e40 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f107ae380 space 0x558f10878840 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f0fa5f980 space 0x558f11441140 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10728100 space 0x558f11424540 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10729f80 space 0x558f11424e40 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1084ce80 space 0x558f1167c540 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10811800 space 0x558f0f563a40 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10784300 space 0x558f10066b40 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1071f000 space 0x558f11440840 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10729d80 space 0x558f10064240 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1077bc80 space 0x558f10c30240 0x0~98 clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1072d980 space 0x558f10becb40 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f0fd59680 space 0x558f1167d740 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1071f580 space 0x558f10065440 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10811980 space 0x558f10754e40 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1071e400 space 0x558f1001e840 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1071f380 space 0x558f10064b40 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f107f6080 space 0x558f0f562540 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1071be80 space 0x558f11441a40 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f0fd59f00 space 0x558f10b98840 0x0~98 clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f107faa00 space 0x558f0f562e40 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1080bc00 space 0x558f10bf0840 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1080bf80 space 0x558f100bce40 0x0~98 clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1071e000 space 0x558f1001fa40 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10784380 space 0x558f10067440 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10810400 space 0x558f10068540 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1084c180 space 0x558f1167ce40 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1077ba00 space 0x558f10bed140 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1071e380 space 0x558f10759740 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10811500 space 0x558f11434240 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1077b400 space 0x558f100bc240 0x0~98 clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10847a00 space 0x558f11435440 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1077a200 space 0x558f100bd740 0x0~98 clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10810980 space 0x558f108d6240 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1071bc80 space 0x558f10c12540 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10847c80 space 0x558f11435d40 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f107af780 space 0x558f10879140 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f0fd59e80 space 0x558f10bed740 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10810300 space 0x558f0fae9140 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f0fd58680 space 0x558f10068e40 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1072c000 space 0x558f10c67440 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1077be80 space 0x558f10758540 0x0~98 clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f107f6a80 space 0x558f10755d40 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f107f6880 space 0x558f10069a40 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1071e200 space 0x558f1001f140 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1072c400 space 0x558f11472240 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1071f780 space 0x558f10065d40 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1072dd00 space 0x558f10c66540 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f107af280 space 0x558f10bf1a40 0x0~9a clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10728380 space 0x558f11434b40 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f107f6e80 space 0x558f1004c240 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f10810700 space 0x558f0f563740 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f1077a600 space 0x558f10b50b40 0x0~98 clean)
Feb  2 06:33:20 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x558f106e0240) split_cache   moving buffer(0x558f107f6180 space 0x558f10040240 0x0~6e clean)
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 46 pg[6.0( v 34'39 (0'0,34'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46 pruub=8.502664566s) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 32'38 mlcod 32'38 active pruub 94.844879150s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.0( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46 pruub=8.502664566s) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 32'38 mlcod 0'0 unknown pruub 94.844879150s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.2( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.6( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.7( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.1( v 34'39 (0'0,34'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.d( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.e( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.f( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.8( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.9( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.a( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.3( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.4( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.5( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.b( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 48 pg[6.c( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb  2 06:33:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Feb  2 06:33:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Feb  2 06:33:21 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.4( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] update: starting ev 5b1d8208-22fe-4cab-ba19-6bac9553c160 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev 9bb4e9be-df3a-4062-910e-3593c828b3e4 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event 9bb4e9be-df3a-4062-910e-3593c828b3e4 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev 47143cce-ae39-48fd-9989-97746b75bc5a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event 47143cce-ae39-48fd-9989-97746b75bc5a (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev bd58a10d-399c-48c2-9912-26d8e910b772 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event bd58a10d-399c-48c2-9912-26d8e910b772 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev e3af81a1-1d28-4c44-9b77-5d5608c9996a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event e3af81a1-1d28-4c44-9b77-5d5608c9996a (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev 1c61a694-c0ea-451f-b212-3439788c608f (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event 1c61a694-c0ea-451f-b212-3439788c608f (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev 45f227f1-b2c3-44b3-b7de-305c7e93d0bf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event 45f227f1-b2c3-44b3-b7de-305c7e93d0bf (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev 8e2639f1-f4a5-44f8-b9fe-7833f45c55f0 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event 8e2639f1-f4a5-44f8-b9fe-7833f45c55f0 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev 834ec13e-7be6-4c75-8700-45ae2cdee3fb (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event 834ec13e-7be6-4c75-8700-45ae2cdee3fb (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev 54ef8c4b-f1d4-4634-9cd8-20b5d7866752 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.8( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.5( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.7( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.6( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.1( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.3( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event 54ef8c4b-f1d4-4634-9cd8-20b5d7866752 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.0( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 32'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.e( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.c( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.2( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 49 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.14( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] complete: finished ev 5b1d8208-22fe-4cab-ba19-6bac9553c160 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event 5b1d8208-22fe-4cab-ba19-6bac9553c160 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.15( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.14( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.16( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.17( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.10( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.11( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.10( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.13( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.11( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.12( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.13( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.e( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.8( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.9( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.a( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.2( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1( v 32'6 (0'0,32'6] local-lis/les=31/32 n=1 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.3( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.a( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.9( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.8( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.2( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.3( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.7( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.6( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.5( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.4( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.5( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.4( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1a( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1a( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.19( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.18( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.18( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.19( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1e( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.16( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.17( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.14( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.10( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.12( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.13( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.8( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.a( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.0( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 39'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.2( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1( v 32'6 (0'0,32'6] local-lis/les=48/49 n=1 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.0( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 32'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.3( v 32'6 (0'0,32'6] local-lis/les=48/49 n=1 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.a( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=48/49 n=1 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.7( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=48/49 n=1 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.5( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1a( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.19( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.18( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.5( v 32'6 (0'0,32'6] local-lis/les=48/49 n=1 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.4( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1e( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=48/49 n=1 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=31/31 les/c/f=32/32/0 sis=48) [1] r=0 lpr=48 pi=[31,48)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1c( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 49 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v106: 243 pgs: 32 activating, 108 unknown, 103 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:33:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 06:33:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 06:33:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Feb  2 06:33:21 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Feb  2 06:33:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Feb  2 06:33:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Feb  2 06:33:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Feb  2 06:33:22 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb  2 06:33:22 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:22 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb  2 06:33:22 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Feb  2 06:33:22 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Feb  2 06:33:22 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Feb  2 06:33:22 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 50 pg[10.0( v 39'18 (0'0,39'18] local-lis/les=35/36 n=9 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=50 pruub=13.420560837s) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 39'17 mlcod 39'17 active pruub 94.945480347s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 50 pg[10.0( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=50 pruub=13.420560837s) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 39'17 mlcod 0'0 unknown pruub 94.945480347s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Feb  2 06:33:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Feb  2 06:33:23 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.12( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.11( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.10( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1f( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1e( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1c( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1b( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:23 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1a( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1d( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.7( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.6( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.18( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.19( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.4( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.5( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.3( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.8( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.f( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.9( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.a( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.b( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.c( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.d( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.e( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=35/36 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.2( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.13( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.14( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.15( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.16( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.12( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.17( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1b( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1f( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1d( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1c( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.18( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.3( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.0( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 39'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.5( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.9( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.a( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.d( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.e( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.c( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.14( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 51 pg[10.15( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [2] r=0 lpr=50 pi=[35,50)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v109: 305 pgs: 62 unknown, 243 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:33:23 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Feb  2 06:33:23 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=50 pruub=14.498394012s) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active pruub 100.974311829s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=50 pruub=14.498394012s) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown pruub 100.974311829s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.6( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.7( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.8( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.f( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.10( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.b( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.c( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.5( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.1a( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.1c( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.4( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.1b( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.18( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.17( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.15( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.19( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.9( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.16( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.a( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.3( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.d( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.e( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.13( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.14( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.2( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.1d( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.1e( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.1( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.1f( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.11( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 51 pg[11.12( empty local-lis/les=37/38 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Feb  2 06:33:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Feb  2 06:33:24 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=50/52 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=37/37 les/c/f=38/38/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:24 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Feb  2 06:33:24 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Feb  2 06:33:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v111: 305 pgs: 62 unknown, 243 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:33:25 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Feb  2 06:33:25 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Feb  2 06:33:25 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Feb  2 06:33:25 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Feb  2 06:33:25 np0005604943 ceph-mgr[75558]: [progress INFO root] Writing back 15 completed events
Feb  2 06:33:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 06:33:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:33:26 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:33:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:33:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.896912575s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.015274048s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.605031967s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.723396301s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.896878242s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.015274048s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.649876595s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.768287659s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.897793770s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.016365051s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.446365356s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.564956665s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.604990005s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.723396301s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.897500992s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.016365051s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.445954323s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.564918518s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.423168182s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.542160034s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.423147202s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.542160034s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.445917130s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.564918518s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.649595261s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.768287659s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.12( v 52'19 (0'0,52'19] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895855904s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 active pruub 98.015243530s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.896510124s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.016021729s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.896488190s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.016021729s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.445418358s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.564956665s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.445036888s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.564926147s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.445118904s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565063477s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.445017815s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.564926147s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.445099831s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565063477s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.648175240s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.768402100s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.649498940s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.769744873s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.648157120s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.768402100s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.649481773s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.769744873s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.603423119s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.723411560s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.12( v 52'19 (0'0,52'19] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895355225s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 98.015243530s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.603059769s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.723411560s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.649312973s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.769821167s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.444539070s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565071106s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.649295807s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.769821167s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.444518089s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565071106s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895417213s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.016166687s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647782326s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.768531799s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895399094s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.016166687s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647757530s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.768531799s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.897605896s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.018402100s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.897566795s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.018402100s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.444192886s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565078735s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.444176674s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565078735s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647607803s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.768646240s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647589684s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.768646240s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895009041s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.016128540s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.894989967s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.016128540s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.444121361s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565269470s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.894975662s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.016136169s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.894905090s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.016136169s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.444075584s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565269470s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647732735s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.769035339s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647712708s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.769035339s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.896852493s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.018302917s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.443835258s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565292358s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647960663s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.769668579s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.443767548s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565361023s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.896492004s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.018302917s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.443468094s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565361023s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.443391800s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565292358s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647786140s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.769668579s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647262573s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.769294739s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647246361s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.769294739s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.896050453s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.018058777s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.443284035s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565414429s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895911217s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.018066406s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895884514s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.018058777s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.443236351s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565414429s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647361755s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.769584656s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647343636s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.769584656s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.443115234s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565361023s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895870209s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.018066406s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.443078041s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565361023s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.443022728s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565505981s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.9( v 52'19 (0'0,52'19] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895886421s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 active pruub 98.018371582s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.443002701s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565505981s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.9( v 52'19 (0'0,52'19] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895856857s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 98.018371582s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647168159s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.769714355s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895849228s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.018424988s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.442820549s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565414429s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647131920s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.769714355s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.442790031s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565414429s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647124290s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.769775391s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895810127s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.018424988s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647105217s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.769775391s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.442857742s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565544128s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.442841530s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565544128s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647763252s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.770545959s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647747993s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.770545959s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.d( v 52'19 (0'0,52'19] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895648003s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 active pruub 98.018508911s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.d( v 52'19 (0'0,52'19] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895598412s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 98.018508911s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647227287s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.770187378s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.442713737s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565719604s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.e( v 52'19 (0'0,52'19] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895520210s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 active pruub 98.018531799s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.442512512s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565536499s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.442694664s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565719604s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.647187233s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.770187378s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.442494392s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565536499s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.e( v 52'19 (0'0,52'19] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895480156s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 98.018531799s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.442450523s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565620422s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.646666527s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.769836426s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895376205s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.018669128s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895358086s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.018669128s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.442268372s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565612793s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.442249298s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565612793s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895196915s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.018646240s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.646631241s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.769836426s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.895057678s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.018646240s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.442045212s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565696716s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.441968918s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565696716s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.442434311s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565620422s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.894697189s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.018539429s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.441810608s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565704346s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.14( v 52'19 (0'0,52'19] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.894836426s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 active pruub 98.018760681s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=50/51 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.894654274s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.018539429s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.441773415s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565704346s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.15( v 52'19 (0'0,52'19] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.894862175s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 active pruub 98.018852234s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.15( v 52'19 (0'0,52'19] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.894778252s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 98.018852234s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.894514084s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.018753052s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.894495964s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.018753052s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.645996094s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.770393372s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.645980835s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.770393372s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.441167831s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565696716s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.645821571s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.770477295s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.14( v 52'19 (0'0,52'19] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.894105911s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 98.018760681s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.645717621s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.770477295s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.894043922s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 active pruub 98.018821716s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=50/51 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.893972397s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 98.018821716s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.440900803s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565696716s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.645315170s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 99.770309448s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=44/45 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.645277977s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 99.770309448s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.440801620s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 98.565711975s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=42/44 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53 pruub=12.440365791s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 98.565711975s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[10.9( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[10.8( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[10.15( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[10.4( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[10.1a( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[10.19( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[10.7( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[10.6( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[10.17( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[10.b( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[10.2( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[10.d( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[10.e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[10.1( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[10.1e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[10.f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[10.11( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[10.10( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[10.16( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.586411476s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.396354675s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.586262703s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.396270752s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.586368561s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.396354675s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.586188316s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.396247864s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.586165428s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.396278381s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.586147308s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.396247864s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.586190224s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.396270752s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.586132050s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.396278381s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.585997581s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.396224976s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.585927963s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.396224976s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.585675240s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.395996094s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.584864616s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.395996094s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.584929466s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.396110535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.584975243s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.396148682s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.437724113s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 active pruub 103.248924255s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.437700272s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 103.248924255s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.584920883s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.396148682s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.584830284s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.396110535s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.584631920s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.395988464s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.437578201s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 active pruub 103.248962402s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.437561035s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 103.248962402s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.584589005s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.395988464s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.3( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.437304497s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 active pruub 103.248832703s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.3( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.437289238s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 103.248832703s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.584411621s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.395980835s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.1( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.437219620s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 active pruub 103.248825073s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.1( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.437201500s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 103.248825073s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.584370613s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.395980835s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.584118843s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.395851135s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.437121391s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 active pruub 103.248855591s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.584115982s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.395858765s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.584101677s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.395851135s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.437104225s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 103.248855591s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.584074020s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.395858765s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583995819s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.395851135s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.7( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.436874390s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 active pruub 103.248764038s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.7( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.436861038s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 103.248764038s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583923340s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.395835876s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583960533s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.395851135s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583909035s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.395835876s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583791733s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.395828247s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583777428s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.395828247s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583568573s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.395668030s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.436457634s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 active pruub 103.248603821s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583363533s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.395515442s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583353043s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.395515442s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.436440468s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 103.248603821s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583530426s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.395668030s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583736420s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.395927429s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583259583s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.395530701s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.5( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.436482430s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 active pruub 103.248748779s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583104134s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 active pruub 107.395515442s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583078384s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.395515442s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.583576202s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.395927429s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[10.13( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=0/0 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[10.12( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[10.14( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=0/0 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.886234283s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.885475159s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.886214256s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.885475159s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.475487709s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.474838257s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.475471497s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.474838257s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.447476387s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.446945190s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.439503670s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438980103s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.447463989s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.446945190s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.441382408s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.440887451s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.439460754s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438980103s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.441342354s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.440887451s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.574275017s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.573875427s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.574263573s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.573875427s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.439150810s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.439002991s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.439116478s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.439002991s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.446763992s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.446983337s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.446724892s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.446983337s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.891164780s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.891723633s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.891072273s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.891723633s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.446275711s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.447021484s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.446261406s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.447021484s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.572746277s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.573715210s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.890783310s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.891769409s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.572693825s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.573715210s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.437896729s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438972473s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.437881470s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438972473s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.572508812s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.573722839s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.437778473s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438995361s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.437757492s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438995361s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.572486877s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.573722839s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445979118s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.447303772s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445682526s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.447052002s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.890406609s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.891769409s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445643425s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.447052002s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445811272s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.447303772s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.890082359s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.891738892s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.890040398s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.891738892s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445293427s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.447059631s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.889973640s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.891754150s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.889959335s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.891754150s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445259094s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.447059631s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445201874s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.447196960s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445187569s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.447196960s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445167542s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.447227478s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445137024s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.447227478s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.436746597s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438957214s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.436725616s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438957214s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.889556885s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.891883850s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.571557999s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.573875427s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.571584702s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.573936462s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.889532089s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.891883850s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.571567535s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.573936462s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.571518898s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.573875427s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.889662743s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.892051697s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=13.582086563s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 unknown NOTIFY pruub 107.395530701s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[6.5( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=9.435258865s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 103.248748779s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445100784s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.447593689s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445062637s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.447578430s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.436358452s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438896179s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445077896s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.447593689s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.889533043s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.892051697s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.436339378s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438896179s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.445045471s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.447578430s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.889349937s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.892005920s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.436018944s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438720703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.436006546s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438720703s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.889303207s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.892005920s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.444441795s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.447280884s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.435806274s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438667297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.889180183s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.892074585s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.444397926s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.447280884s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.889161110s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.892074585s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.570947647s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.573898315s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.435531616s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438667297s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.444147110s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.447319031s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.444111824s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.447319031s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.888871193s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.892120361s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.888857841s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.892120361s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.570528984s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.573905945s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.570519447s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.573905945s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.444473267s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.447875977s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.444450378s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.447875977s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.888667107s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.892143250s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.444234848s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.447731018s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.570812225s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.573898315s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.435048103s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438598633s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.570357323s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.573913574s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.435032845s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438598633s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.444210052s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.447731018s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.570312500s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.573913574s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.570205688s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.573921204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.570187569s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.573921204s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.888656616s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.892143250s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.443936348s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.447799683s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.443910599s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.447799683s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.570146561s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.574066162s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.434630394s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438583374s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.434718132s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438735962s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.570042610s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.574066162s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.434587479s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438583374s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.434674263s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438735962s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.569829941s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.573959351s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.888038635s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.892189026s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.888026237s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.892189026s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.888100624s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.892166138s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.434267044s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438453674s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.569804192s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.573959351s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.434249878s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438453674s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.443549156s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.447845459s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.887889862s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.892166138s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.443536758s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.447845459s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.443557739s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.447959900s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.887763023s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.892219543s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.443523407s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.447959900s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.887722969s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.892219543s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.443034172s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.447944641s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.442995071s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.447944641s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.442621231s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.447898865s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.433201790s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438484192s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.442603111s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.447898865s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.433156013s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438484192s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.886839867s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.892280579s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.886822701s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.892280579s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[8.15( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.568623543s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.574195862s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.442348480s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.447921753s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.568145752s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.574195862s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.432129860s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438301086s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.432110786s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438301086s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.441741943s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.447921753s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.567803383s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.574089050s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.567791939s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.574089050s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=48/49 n=1 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.441563606s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.447929382s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=48/49 n=1 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.441573143s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.447998047s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.431744576s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438194275s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=48/49 n=1 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.441559792s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.447998047s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.431725502s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438194275s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=48/49 n=1 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.441503525s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.447929382s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.885778427s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.892372131s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.885741234s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.892372131s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.441341400s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.447998047s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.441321373s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.447998047s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.567317009s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.574073792s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.567299843s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.574073792s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.431384087s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438179016s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.431346893s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438179016s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.567337036s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.574272156s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.567322731s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.574272156s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=48/49 n=1 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.440923691s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.448005676s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=48/49 n=1 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.440886497s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.448005676s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[8.11( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.440800667s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 39'483 active pruub 99.448013306s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.431621552s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438453674s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.440765381s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 39'483 unknown NOTIFY pruub 99.448013306s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.431175232s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438453674s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.884864807s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.892295837s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.886968613s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.894454956s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[8.12( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[8.14( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[8.d( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.884819031s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.892295837s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.886947632s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.894454956s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.440073013s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.448196411s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.440052032s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.448196411s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.427309990s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.438140869s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.427280426s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.438140869s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.883454323s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.894348145s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.883321762s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.894348145s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.563013077s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.574295044s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.436758995s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.448043823s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.562959671s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.574295044s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.436664581s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.448043823s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[8.10( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.436312675s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.448051453s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.436244965s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.448051453s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[8.2( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[8.4( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.881131172s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.894332886s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.881112099s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.894332886s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.560917854s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.574203491s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.560900688s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.574203491s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.880989075s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.894332886s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.880961418s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.894332886s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[8.1b( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.880965233s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.894409180s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.434649467s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.448112488s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.880949020s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.894409180s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.434631348s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.448112488s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.434644699s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.448242188s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.434502602s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.448097229s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.434503555s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.448158264s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.434577942s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.448242188s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.434431076s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.448097229s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.413757324s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.427543640s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.413741112s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.427543640s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.560595512s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.574485779s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.880414009s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.894355774s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.880398750s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.894355774s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.560560226s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.574485779s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.560332298s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.574508667s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.560317039s) [2] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.574508667s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.434469223s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.448158264s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.880052567s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 active pruub 102.894401550s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.434003830s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 active pruub 99.448387146s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=50/52 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.880032539s) [2] r=-1 lpr=53 pi=[50,53)/1 crt=0'0 unknown NOTIFY pruub 102.894401550s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.433839798s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.448234558s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.433981895s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 99.448387146s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.433663368s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.448234558s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.412912369s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 active pruub 105.427551270s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.559703827s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 active pruub 103.574386597s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=15.412868500s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 unknown NOTIFY pruub 105.427551270s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=42/45 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53 pruub=13.559608459s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 unknown NOTIFY pruub 103.574386597s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.433258057s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 active pruub 99.448265076s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=48/49 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=9.433235168s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 99.448265076s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[6.d( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[8.c( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 53 pg[8.1c( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[6.3( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[6.1( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[8.e( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[8.b( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[6.7( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[8.f( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[8.9( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[6.9( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[8.6( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 53 pg[6.5( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=0/0 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[8.1a( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[8.18( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[8.1f( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=0/0 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 53 pg[8.1d( empty local-lis/les=0/0 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Feb  2 06:33:27 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Feb  2 06:33:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Feb  2 06:33:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Feb  2 06:33:28 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  2 06:33:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  2 06:33:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[5.19( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[5.18( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[10.14( v 52'19 lc 36'7 (0'0,52'19] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[5.1a( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[10.12( v 52'19 lc 39'17 (0'0,52'19] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[2.1b( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[5.1d( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[5.1( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[2.6( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[2.7( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[2.4( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[4.4( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[6.d( v 34'39 lc 32'8 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[4.f( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[6.3( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=34'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[4.d( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[2.9( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[6.f( v 34'39 lc 32'1 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[5.c( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[4.2( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[3.6( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[3.3( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[3.1( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[3.1b( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[2.19( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[5.1e( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=32'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[3.f( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[10.9( v 52'19 lc 36'8 (0'0,52'19] local-lis/les=53/54 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[2.18( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[5.7( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[10.15( v 52'19 lc 36'3 (0'0,52'19] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[2.1d( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[2.f( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=1 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[5.4( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[3.c( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[2.2( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[2.1f( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[2.1c( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[5.2( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[5.5( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[10.d( v 52'19 lc 36'5 (0'0,52'19] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[5.3( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[10.e( v 52'19 lc 36'4 (0'0,52'19] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[2.8( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[2.b( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[3.9( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[3.17( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=1 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=46/22 lis/c=46/46 les/c/f=47/47/0 sis=53) [2] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[6.1( v 34'39 (0'0,34'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[2.16( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[5.15( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=42/18 lis/c=42/42 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[5.14( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[2.11( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[2.13( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 54 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=48/31 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=50/37 lis/c=50/50 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[2.a( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[2.5( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[4.7( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[6.5( v 34'39 lc 32'7 (0'0,34'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[4.5( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[5.f( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[2.3( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[2.d( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[6.7( v 34'39 lc 32'21 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[6.b( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=34'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[4.9( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[4.8( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[5.16( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[4.14( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[5.12( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[2.15( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[5.13( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[2.17( empty local-lis/les=53/54 n=0 ec=42/17 lis/c=42/42 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[4.12( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[5.11( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[4.10( empty local-lis/les=53/54 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:28 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 54 pg[5.9( empty local-lis/les=53/54 n=0 ec=44/20 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v115: 305 pgs: 16 unknown, 73 peering, 216 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:33:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Feb  2 06:33:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Feb  2 06:33:29 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.5( v 49'484 (0'0,49'484] local-lis/les=54/55 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=49'484 lcod 39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:29 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[48,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Feb  2 06:33:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Feb  2 06:33:30 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.047895432s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 108.092330933s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.048052788s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 108.092521667s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.047871590s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092521667s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.047616959s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 108.092315674s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.047531128s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092315674s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.047392845s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092330933s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.047320366s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 108.092414856s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.047264099s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092414856s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.034152031s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 108.079498291s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.047063828s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 108.092575073s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.046577454s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 108.092117310s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.034036636s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.079498291s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.046530724s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092117310s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.046464920s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 108.092330933s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.046934128s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092575073s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.046423912s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092330933s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.044881821s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 active pruub 108.092247009s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:30 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56 pruub=15.044801712s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092247009s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 16 unknown, 73 peering, 216 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:33:31 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Feb  2 06:33:31 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Feb  2 06:33:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Feb  2 06:33:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Feb  2 06:33:31 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=49'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=49'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.032182693s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 108.092727661s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.031960487s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 108.092514038s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.031437874s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 108.092193604s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.032058716s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 108.092658997s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.031791687s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092727661s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.031522751s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092514038s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.031025887s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092193604s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.031177521s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=49'484 lcod 55'485 active pruub 108.092712402s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.031634331s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092658997s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.031040192s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=49'484 lcod 55'485 unknown NOTIFY pruub 108.092712402s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.030065536s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 108.092750549s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.029497147s) [0] async=[0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 active pruub 108.092483521s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.029835701s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092750549s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:31 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57 pruub=14.029314041s) [0] r=-1 lpr=57 pi=[48,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 108.092483521s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Feb  2 06:33:31 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Feb  2 06:33:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:33:32 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Feb  2 06:33:32 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Feb  2 06:33:32 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Feb  2 06:33:32 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Feb  2 06:33:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Feb  2 06:33:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Feb  2 06:33:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Feb  2 06:33:32 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 58 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:32 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=57/58 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=55'486 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:32 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:32 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:32 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:32 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:32 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/33 lis/c=54/48 les/c/f=55/49/0 sis=57) [0] r=0 lpr=57 pi=[48,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:32 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Feb  2 06:33:32 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Feb  2 06:33:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s; 1.5 KiB/s, 2 keys/s, 30 objects/s recovering
Feb  2 06:33:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Feb  2 06:33:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb  2 06:33:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Feb  2 06:33:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb  2 06:33:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Feb  2 06:33:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb  2 06:33:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb  2 06:33:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  2 06:33:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  2 06:33:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Feb  2 06:33:33 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Feb  2 06:33:34 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Feb  2 06:33:34 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Feb  2 06:33:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  2 06:33:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Feb  2 06:33:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 6.8 KiB/s wr, 146 op/s; 1.3 KiB/s, 2 keys/s, 25 objects/s recovering
Feb  2 06:33:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Feb  2 06:33:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb  2 06:33:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Feb  2 06:33:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb  2 06:33:35 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Feb  2 06:33:35 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Feb  2 06:33:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Feb  2 06:33:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  2 06:33:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  2 06:33:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Feb  2 06:33:35 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb  2 06:33:35 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 60 pg[6.3( v 34'39 (0'0,34'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=8.872867584s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=34'39 active pruub 107.005912781s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 60 pg[6.3( v 34'39 (0'0,34'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=8.872781754s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=34'39 unknown NOTIFY pruub 107.005912781s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 60 pg[6.7( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=8.880386353s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=34'39 active pruub 107.014022827s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 60 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=8.872290611s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=34'39 active pruub 107.005973816s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 60 pg[6.7( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=8.880327225s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=34'39 unknown NOTIFY pruub 107.014022827s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 60 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=8.871978760s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=34'39 unknown NOTIFY pruub 107.005973816s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 60 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=8.879766464s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=34'39 active pruub 107.014251709s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 60 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=8.879552841s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=34'39 unknown NOTIFY pruub 107.014251709s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:35 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Feb  2 06:33:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 60 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 60 pg[6.3( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 60 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 60 pg[6.7( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 59 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59 pruub=9.293912888s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=34'39 lcod 0'0 active pruub 111.249099731s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 60 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59 pruub=9.293802261s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 111.249099731s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 59 pg[6.6( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59 pruub=9.293148041s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=34'39 lcod 0'0 active pruub 111.249046326s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 60 pg[6.6( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59 pruub=9.293067932s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 111.249046326s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 59 pg[6.2( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59 pruub=9.292979240s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=34'39 lcod 0'0 active pruub 111.249069214s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 59 pg[6.e( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59 pruub=9.292851448s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=34'39 lcod 0'0 active pruub 111.249069214s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 60 pg[6.2( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59 pruub=9.292873383s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 111.249069214s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 60 pg[6.e( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59 pruub=9.292778969s) [1] r=-1 lpr=59 pi=[46,59)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 111.249069214s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 60 pg[6.a( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 60 pg[6.6( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 60 pg[6.2( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:35 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 60 pg[6.e( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:35 np0005604943 ceph-mgr[75558]: [progress INFO root] Completed event 6f347e7f-0d73-43ba-979c-4b7ffdd13652 (Global Recovery Event) in 20 seconds
Feb  2 06:33:36 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Feb  2 06:33:36 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Feb  2 06:33:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Feb  2 06:33:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Feb  2 06:33:36 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Feb  2 06:33:36 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 61 pg[6.2( v 34'39 (0'0,34'39] local-lis/les=59/61 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:36 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 61 pg[6.6( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=59/61 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=34'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  2 06:33:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  2 06:33:36 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 61 pg[6.e( v 34'39 lc 32'11 (0'0,34'39] local-lis/les=59/61 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:36 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 61 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=59/61 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=59) [1] r=0 lpr=60 pi=[46,59)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:36 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 61 pg[6.f( v 34'39 lc 32'1 (0'0,34'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:36 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 61 pg[6.b( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=34'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:36 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 61 pg[6.7( v 34'39 lc 32'21 (0'0,34'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:36 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 61 pg[6.3( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=60/61 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=34'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:33:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 6.5 KiB/s wr, 139 op/s; 914 B/s, 22 objects/s recovering
Feb  2 06:33:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Feb  2 06:33:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb  2 06:33:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Feb  2 06:33:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb  2 06:33:37 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Feb  2 06:33:37 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Feb  2 06:33:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Feb  2 06:33:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  2 06:33:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  2 06:33:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Feb  2 06:33:37 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Feb  2 06:33:37 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb  2 06:33:37 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb  2 06:33:38 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Feb  2 06:33:38 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Feb  2 06:33:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  2 06:33:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  2 06:33:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v128: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 112 B/s, 1 keys/s, 1 objects/s recovering
Feb  2 06:33:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Feb  2 06:33:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb  2 06:33:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Feb  2 06:33:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb  2 06:33:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Feb  2 06:33:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  2 06:33:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  2 06:33:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Feb  2 06:33:39 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Feb  2 06:33:39 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb  2 06:33:39 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb  2 06:33:39 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.c scrub starts
Feb  2 06:33:39 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.c scrub ok
Feb  2 06:33:40 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 62 pg[6.4( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=62 pruub=12.603176117s) [1] r=-1 lpr=62 pi=[46,62)/1 crt=34'39 lcod 0'0 active pruub 119.243782043s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:40 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 63 pg[6.4( v 34'39 (0'0,34'39] local-lis/les=46/49 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=62 pruub=12.602930069s) [1] r=-1 lpr=62 pi=[46,62)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 119.243782043s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:40 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 62 pg[6.c( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=62 pruub=12.607831955s) [1] r=-1 lpr=62 pi=[46,62)/1 crt=34'39 lcod 0'0 active pruub 119.249198914s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:40 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 63 pg[6.c( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=62 pruub=12.607725143s) [1] r=-1 lpr=62 pi=[46,62)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 119.249198914s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:40 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 63 pg[6.c( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=62) [1] r=0 lpr=63 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:40 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 63 pg[6.4( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=62) [1] r=0 lpr=63 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:40 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 63 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=12.170778275s) [0] r=-1 lpr=63 pi=[53,63)/1 crt=34'39 active pruub 115.006057739s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:40 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 63 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=12.170745850s) [0] r=-1 lpr=63 pi=[53,63)/1 crt=34'39 unknown NOTIFY pruub 115.006057739s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:40 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 63 pg[6.5( v 34'39 (0'0,34'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=12.178554535s) [0] r=-1 lpr=63 pi=[53,63)/1 crt=34'39 active pruub 115.014091492s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:40 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 63 pg[6.5( v 34'39 (0'0,34'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=12.178535461s) [0] r=-1 lpr=63 pi=[53,63)/1 crt=34'39 unknown NOTIFY pruub 115.014091492s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:40 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 63 pg[6.d( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [0] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:40 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 63 pg[6.5( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [0] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:40 np0005604943 python3[98411]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:33:40 np0005604943 podman[98412]: 2026-02-02 11:33:40.634691478 +0000 UTC m=+0.057918380 container create 58207885e0c3e1ff71a0d835ac79b12b0203953b91b30b6dfd83cdc8b76fcb16 (image=quay.io/ceph/ceph:v20, name=xenodochial_kalam, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 06:33:40 np0005604943 systemd[76640]: Starting Mark boot as successful...
Feb  2 06:33:40 np0005604943 systemd[76640]: Finished Mark boot as successful.
Feb  2 06:33:40 np0005604943 systemd[1]: Started libpod-conmon-58207885e0c3e1ff71a0d835ac79b12b0203953b91b30b6dfd83cdc8b76fcb16.scope.
Feb  2 06:33:40 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:33:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755b3cd98597884bd3b9bd54720657cf64493c3bf6755a22fb8ed7fcf16bec4b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:33:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755b3cd98597884bd3b9bd54720657cf64493c3bf6755a22fb8ed7fcf16bec4b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:33:40 np0005604943 podman[98412]: 2026-02-02 11:33:40.60494464 +0000 UTC m=+0.028171572 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:33:40 np0005604943 podman[98412]: 2026-02-02 11:33:40.713519157 +0000 UTC m=+0.136746049 container init 58207885e0c3e1ff71a0d835ac79b12b0203953b91b30b6dfd83cdc8b76fcb16 (image=quay.io/ceph/ceph:v20, name=xenodochial_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 06:33:40 np0005604943 podman[98412]: 2026-02-02 11:33:40.719480915 +0000 UTC m=+0.142707787 container start 58207885e0c3e1ff71a0d835ac79b12b0203953b91b30b6dfd83cdc8b76fcb16 (image=quay.io/ceph/ceph:v20, name=xenodochial_kalam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:33:40 np0005604943 podman[98412]: 2026-02-02 11:33:40.730029656 +0000 UTC m=+0.153256548 container attach 58207885e0c3e1ff71a0d835ac79b12b0203953b91b30b6dfd83cdc8b76fcb16 (image=quay.io/ceph/ceph:v20, name=xenodochial_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 06:33:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:33:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:33:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:33:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:33:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:33:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:33:40 np0005604943 ceph-mgr[75558]: [progress INFO root] Writing back 16 completed events
Feb  2 06:33:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 06:33:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:33:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Feb  2 06:33:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Feb  2 06:33:40 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Feb  2 06:33:40 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 64 pg[6.4( v 34'39 lc 32'9 (0'0,34'39] local-lis/les=62/64 n=2 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=62) [1] r=0 lpr=63 pi=[46,62)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:40 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 64 pg[6.5( v 34'39 lc 32'7 (0'0,34'39] local-lis/les=63/64 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [0] r=0 lpr=63 pi=[53,63)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:40 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 64 pg[6.d( v 34'39 lc 32'8 (0'0,34'39] local-lis/les=63/64 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [0] r=0 lpr=63 pi=[53,63)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  2 06:33:40 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  2 06:33:40 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 64 pg[6.c( v 34'39 lc 32'10 (0'0,34'39] local-lis/les=62/64 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=62) [1] r=0 lpr=63 pi=[46,62)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:41 np0005604943 xenodochial_kalam[98428]: could not fetch user info: no user info saved
Feb  2 06:33:41 np0005604943 systemd[1]: libpod-58207885e0c3e1ff71a0d835ac79b12b0203953b91b30b6dfd83cdc8b76fcb16.scope: Deactivated successfully.
Feb  2 06:33:41 np0005604943 podman[98412]: 2026-02-02 11:33:41.04917282 +0000 UTC m=+0.472399692 container died 58207885e0c3e1ff71a0d835ac79b12b0203953b91b30b6dfd83cdc8b76fcb16 (image=quay.io/ceph/ceph:v20, name=xenodochial_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:33:41 np0005604943 systemd[1]: var-lib-containers-storage-overlay-755b3cd98597884bd3b9bd54720657cf64493c3bf6755a22fb8ed7fcf16bec4b-merged.mount: Deactivated successfully.
Feb  2 06:33:41 np0005604943 podman[98412]: 2026-02-02 11:33:41.122393156 +0000 UTC m=+0.545620028 container remove 58207885e0c3e1ff71a0d835ac79b12b0203953b91b30b6dfd83cdc8b76fcb16 (image=quay.io/ceph/ceph:v20, name=xenodochial_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:33:41 np0005604943 systemd[1]: libpod-conmon-58207885e0c3e1ff71a0d835ac79b12b0203953b91b30b6dfd83cdc8b76fcb16.scope: Deactivated successfully.
Feb  2 06:33:41 np0005604943 python3[98553]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 4548a36b-7cdc-5e3e-a814-4e1571be1fae -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:33:41 np0005604943 podman[98554]: 2026-02-02 11:33:41.474454147 +0000 UTC m=+0.034434306 container create 697acc830f6d27a7b345f7084211ac55c622236d2ebbffc295e179dfc7d4ac40 (image=quay.io/ceph/ceph:v20, name=sad_shirley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 06:33:41 np0005604943 systemd[1]: Started libpod-conmon-697acc830f6d27a7b345f7084211ac55c622236d2ebbffc295e179dfc7d4ac40.scope.
Feb  2 06:33:41 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:33:41 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503319aef7fc9db299009a7796a78b394736d4037821413dc58dd6c203c60d86/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:33:41 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503319aef7fc9db299009a7796a78b394736d4037821413dc58dd6c203c60d86/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:33:41 np0005604943 podman[98554]: 2026-02-02 11:33:41.551371911 +0000 UTC m=+0.111352060 container init 697acc830f6d27a7b345f7084211ac55c622236d2ebbffc295e179dfc7d4ac40 (image=quay.io/ceph/ceph:v20, name=sad_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 06:33:41 np0005604943 podman[98554]: 2026-02-02 11:33:41.458148601 +0000 UTC m=+0.018128760 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb  2 06:33:41 np0005604943 podman[98554]: 2026-02-02 11:33:41.558608742 +0000 UTC m=+0.118588871 container start 697acc830f6d27a7b345f7084211ac55c622236d2ebbffc295e179dfc7d4ac40 (image=quay.io/ceph/ceph:v20, name=sad_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 06:33:41 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Feb  2 06:33:41 np0005604943 podman[98554]: 2026-02-02 11:33:41.56787778 +0000 UTC m=+0.127857999 container attach 697acc830f6d27a7b345f7084211ac55c622236d2ebbffc295e179dfc7d4ac40 (image=quay.io/ceph/ceph:v20, name=sad_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 06:33:41 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Feb  2 06:33:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 136 B/s, 2 keys/s, 1 objects/s recovering
Feb  2 06:33:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Feb  2 06:33:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb  2 06:33:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Feb  2 06:33:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb  2 06:33:41 np0005604943 sad_shirley[98569]: {
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "user_id": "openstack",
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "display_name": "openstack",
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "email": "",
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "suspended": 0,
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "max_buckets": 1000,
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "subusers": [],
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "keys": [
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:        {
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:            "user": "openstack",
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:            "access_key": "SWBRDOEF4WT2SAOS4V9J",
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:            "secret_key": "xoaJbU9JfQysinONvclGPBrrGDeqQwrBMweCOvdQ",
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:            "active": true,
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:            "create_date": "2026-02-02T11:33:41.811693Z"
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:        }
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    ],
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "swift_keys": [],
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "caps": [],
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "op_mask": "read, write, delete",
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "default_placement": "",
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "default_storage_class": "",
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "placement_tags": [],
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "bucket_quota": {
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:        "enabled": false,
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:        "check_on_raw": false,
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:        "max_size": -1,
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:        "max_size_kb": 0,
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:        "max_objects": -1
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    },
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "user_quota": {
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:        "enabled": false,
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:        "check_on_raw": false,
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:        "max_size": -1,
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:        "max_size_kb": 0,
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:        "max_objects": -1
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    },
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "temp_url_keys": [],
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "type": "rgw",
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "mfa_ids": [],
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "account_id": "",
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "path": "/",
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "create_date": "2026-02-02T11:33:41.811242Z",
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "tags": [],
Feb  2 06:33:41 np0005604943 sad_shirley[98569]:    "group_ids": []
Feb  2 06:33:41 np0005604943 sad_shirley[98569]: }
Feb  2 06:33:41 np0005604943 sad_shirley[98569]: 
Feb  2 06:33:41 np0005604943 systemd[1]: libpod-697acc830f6d27a7b345f7084211ac55c622236d2ebbffc295e179dfc7d4ac40.scope: Deactivated successfully.
Feb  2 06:33:41 np0005604943 podman[98655]: 2026-02-02 11:33:41.904134273 +0000 UTC m=+0.034129640 container died 697acc830f6d27a7b345f7084211ac55c622236d2ebbffc295e179dfc7d4ac40 (image=quay.io/ceph/ceph:v20, name=sad_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 06:33:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Feb  2 06:33:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  2 06:33:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  2 06:33:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Feb  2 06:33:41 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Feb  2 06:33:41 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Feb  2 06:33:41 np0005604943 systemd[1]: var-lib-containers-storage-overlay-503319aef7fc9db299009a7796a78b394736d4037821413dc58dd6c203c60d86-merged.mount: Deactivated successfully.
Feb  2 06:33:41 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Feb  2 06:33:41 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:33:41 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb  2 06:33:41 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb  2 06:33:41 np0005604943 podman[98655]: 2026-02-02 11:33:41.988465211 +0000 UTC m=+0.118460528 container remove 697acc830f6d27a7b345f7084211ac55c622236d2ebbffc295e179dfc7d4ac40 (image=quay.io/ceph/ceph:v20, name=sad_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:33:41 np0005604943 systemd[1]: libpod-conmon-697acc830f6d27a7b345f7084211ac55c622236d2ebbffc295e179dfc7d4ac40.scope: Deactivated successfully.
Feb  2 06:33:41 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Feb  2 06:33:42 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Feb  2 06:33:42 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=10.770839691s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=39'483 lcod 0'0 active pruub 115.447471619s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:42 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=10.770784378s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 115.447471619s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:42 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 65 pg[9.e( v 64'489 (0'0,64'489] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=10.771222115s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=64'488 lcod 64'488 active pruub 115.448417664s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:42 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=10.771521568s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=39'483 lcod 0'0 active pruub 115.448822021s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:42 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 65 pg[9.e( v 64'489 (0'0,64'489] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=10.771156311s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=64'488 lcod 64'488 unknown NOTIFY pruub 115.448417664s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:42 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=10.771500587s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 115.448822021s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:42 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 65 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=10.771339417s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=63'484 lcod 63'484 active pruub 115.448913574s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:42 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 65 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=10.771310806s) [2] r=-1 lpr=65 pi=[48,65)/1 crt=63'484 lcod 63'484 unknown NOTIFY pruub 115.448913574s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=65) [2] r=0 lpr=65 pi=[48,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Feb  2 06:33:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Feb  2 06:33:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Feb  2 06:33:42 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:42 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[48,66)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  2 06:33:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  2 06:33:43 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 66 pg[9.e( v 64'489 (0'0,64'489] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=64'488 lcod 64'488 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:43 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 66 pg[9.e( v 64'489 (0'0,64'489] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=64'488 lcod 64'488 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:43 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:43 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:43 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:43 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 66 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=63'484 lcod 63'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:43 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:43 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 66 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] r=0 lpr=66 pi=[48,66)/1 crt=63'484 lcod 63'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:43 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Feb  2 06:33:43 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Feb  2 06:33:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v134: 305 pgs: 4 unknown, 1 active+clean+scrubbing, 300 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 33 op/s; 445 B/s, 2 objects/s recovering
Feb  2 06:33:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Feb  2 06:33:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Feb  2 06:33:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Feb  2 06:33:44 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 67 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=66/67 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[48,66)/1 crt=64'485 lcod 63'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:44 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=66/67 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[48,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:44 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 67 pg[9.e( v 64'489 (0'0,64'489] local-lis/les=66/67 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[48,66)/1 crt=64'489 lcod 64'488 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:44 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=66/67 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[48,66)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:44 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Feb  2 06:33:44 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Feb  2 06:33:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Feb  2 06:33:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Feb  2 06:33:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Feb  2 06:33:45 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=66/67 n=6 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.381020546s) [2] async=[2] r=-1 lpr=68 pi=[48,68)/1 crt=39'483 lcod 0'0 active pruub 122.760429382s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:45 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=66/67 n=6 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.380953789s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 122.760429382s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:45 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 68 pg[9.e( v 64'489 (0'0,64'489] local-lis/les=66/67 n=7 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.388247490s) [2] async=[2] r=-1 lpr=68 pi=[48,68)/1 crt=64'489 lcod 64'488 active pruub 122.768058777s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:45 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=66/67 n=7 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.388137817s) [2] async=[2] r=-1 lpr=68 pi=[48,68)/1 crt=39'483 lcod 0'0 active pruub 122.768074036s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:45 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=66/67 n=7 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.388106346s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 122.768074036s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:45 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 68 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=66/67 n=6 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.380139351s) [2] async=[2] r=-1 lpr=68 pi=[48,68)/1 crt=64'485 lcod 63'484 active pruub 122.760269165s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:45 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 68 pg[9.e( v 64'489 (0'0,64'489] local-lis/les=66/67 n=7 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.388110161s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=64'489 lcod 64'488 unknown NOTIFY pruub 122.768058777s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:45 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 68 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=66/67 n=6 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68 pruub=15.380033493s) [2] r=-1 lpr=68 pi=[48,68)/1 crt=64'485 lcod 63'484 unknown NOTIFY pruub 122.760269165s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:45 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:45 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:45 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 68 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=0/0 n=6 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 pct=0'0 crt=64'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:45 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 68 pg[9.e( v 64'489 (0'0,64'489] local-lis/les=0/0 n=7 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 pct=0'0 crt=64'489 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:45 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:45 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 68 pg[9.e( v 64'489 (0'0,64'489] local-lis/les=0/0 n=7 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=64'489 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:45 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:45 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 68 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=0/0 n=6 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=64'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v137: 305 pgs: 4 unknown, 1 active+clean+scrubbing, 300 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 33 op/s; 445 B/s, 2 objects/s recovering
Feb  2 06:33:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Feb  2 06:33:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Feb  2 06:33:46 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Feb  2 06:33:46 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Feb  2 06:33:46 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 69 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:46 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 69 pg[9.e( v 64'489 (0'0,64'489] local-lis/les=68/69 n=7 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=64'489 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:46 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 69 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=68/69 n=7 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:46 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 69 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=68/69 n=6 ec=48/33 lis/c=66/48 les/c/f=67/49/0 sis=68) [2] r=0 lpr=68 pi=[48,68)/1 crt=64'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:46 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Feb  2 06:33:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:33:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v139: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 69 op/s; 214 B/s, 6 objects/s recovering
Feb  2 06:33:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Feb  2 06:33:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb  2 06:33:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Feb  2 06:33:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb  2 06:33:47 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Feb  2 06:33:47 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Feb  2 06:33:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Feb  2 06:33:48 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb  2 06:33:48 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb  2 06:33:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  2 06:33:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  2 06:33:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Feb  2 06:33:48 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Feb  2 06:33:48 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 70 pg[9.7( v 64'487 (0'0,64'487] local-lis/les=56/57 n=7 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=15.413840294s) [2] r=-1 lpr=70 pi=[56,70)/1 crt=64'486 lcod 64'486 active pruub 129.877838135s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:48 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 70 pg[9.17( v 64'485 (0'0,64'485] local-lis/les=56/57 n=6 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=15.413784981s) [2] r=-1 lpr=70 pi=[56,70)/1 crt=64'484 lcod 64'484 active pruub 129.877838135s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:48 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 70 pg[9.7( v 64'487 (0'0,64'487] local-lis/les=56/57 n=7 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=15.413771629s) [2] r=-1 lpr=70 pi=[56,70)/1 crt=64'486 lcod 64'486 unknown NOTIFY pruub 129.877838135s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:48 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 70 pg[9.17( v 64'485 (0'0,64'485] local-lis/les=56/57 n=6 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=15.413745880s) [2] r=-1 lpr=70 pi=[56,70)/1 crt=64'484 lcod 64'484 unknown NOTIFY pruub 129.877838135s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:48 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 70 pg[9.f( v 64'485 (0'0,64'485] local-lis/les=56/57 n=7 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=15.413588524s) [2] r=-1 lpr=70 pi=[56,70)/1 crt=64'484 lcod 64'484 active pruub 129.877899170s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:48 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 70 pg[9.f( v 64'485 (0'0,64'485] local-lis/les=56/57 n=7 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=15.413571358s) [2] r=-1 lpr=70 pi=[56,70)/1 crt=64'484 lcod 64'484 unknown NOTIFY pruub 129.877899170s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:48 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.531422615s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=39'483 active pruub 122.995864868s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:48 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.531409264s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=39'483 unknown NOTIFY pruub 122.995864868s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:48 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 70 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:48 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 70 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=70) [2] r=0 lpr=70 pi=[56,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:48 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 70 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=70) [2] r=0 lpr=70 pi=[56,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:48 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 70 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=70) [2] r=0 lpr=70 pi=[56,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:48 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Feb  2 06:33:48 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Feb  2 06:33:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Feb  2 06:33:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Feb  2 06:33:49 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Feb  2 06:33:49 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 71 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:49 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 71 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:49 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 71 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:49 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 71 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:49 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 71 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:49 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 71 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[56,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:49 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 71 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:49 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 71 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[0] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:49 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 71 pg[9.f( v 64'485 (0'0,64'485] local-lis/les=56/57 n=7 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] r=0 lpr=71 pi=[56,71)/1 crt=64'484 lcod 64'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:49 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 71 pg[9.7( v 64'487 (0'0,64'487] local-lis/les=56/57 n=7 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] r=0 lpr=71 pi=[56,71)/1 crt=64'486 lcod 64'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:49 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 71 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[0] r=0 lpr=71 pi=[57,71)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:49 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 71 pg[9.f( v 64'485 (0'0,64'485] local-lis/les=56/57 n=7 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] r=0 lpr=71 pi=[56,71)/1 crt=64'484 lcod 64'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:49 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 71 pg[9.7( v 64'487 (0'0,64'487] local-lis/les=56/57 n=7 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] r=0 lpr=71 pi=[56,71)/1 crt=64'486 lcod 64'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:49 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 71 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[0] r=0 lpr=71 pi=[57,71)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:49 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 71 pg[9.17( v 64'485 (0'0,64'485] local-lis/les=56/57 n=6 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] r=0 lpr=71 pi=[56,71)/1 crt=64'484 lcod 64'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:49 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 71 pg[9.17( v 64'485 (0'0,64'485] local-lis/les=56/57 n=6 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] r=0 lpr=71 pi=[56,71)/1 crt=64'484 lcod 64'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:49 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  2 06:33:49 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  2 06:33:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v142: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 KiB/s wr, 45 op/s; 183 B/s, 5 objects/s recovering
Feb  2 06:33:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Feb  2 06:33:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb  2 06:33:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Feb  2 06:33:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb  2 06:33:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Feb  2 06:33:50 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  2 06:33:50 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  2 06:33:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Feb  2 06:33:50 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Feb  2 06:33:50 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 72 pg[6.8( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=72 pruub=10.777210236s) [2] r=-1 lpr=72 pi=[46,72)/1 crt=34'39 lcod 0'0 active pruub 127.249359131s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:50 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 72 pg[6.8( v 34'39 (0'0,34'39] local-lis/les=46/49 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=72 pruub=10.777157784s) [2] r=-1 lpr=72 pi=[46,72)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 127.249359131s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:50 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 72 pg[6.8( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=72) [2] r=0 lpr=72 pi=[46,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:50 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 72 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=71/72 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:50 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 72 pg[9.f( v 64'485 (0'0,64'485] local-lis/les=71/72 n=7 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[56,71)/1 crt=64'485 lcod 64'484 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:50 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 72 pg[9.7( v 64'487 (0'0,64'487] local-lis/les=71/72 n=7 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[56,71)/1 crt=64'487 lcod 64'486 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:50 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 72 pg[9.17( v 64'485 (0'0,64'485] local-lis/les=71/72 n=6 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=71) [2]/[0] async=[2] r=0 lpr=71 pi=[56,71)/1 crt=64'485 lcod 64'484 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:50 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb  2 06:33:50 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb  2 06:33:50 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  2 06:33:50 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  2 06:33:50 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=72 pruub=10.671212196s) [2] r=-1 lpr=72 pi=[48,72)/1 crt=39'483 lcod 0'0 active pruub 123.448806763s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:50 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=72 pruub=10.671151161s) [2] r=-1 lpr=72 pi=[48,72)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 123.448806763s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:50 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=72) [2] r=0 lpr=72 pi=[48,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:50 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 72 pg[9.18( v 64'487 (0'0,64'487] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=72 pruub=10.670752525s) [2] r=-1 lpr=72 pi=[48,72)/1 crt=64'486 lcod 64'486 active pruub 123.448936462s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:50 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 72 pg[9.18( v 64'487 (0'0,64'487] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=72 pruub=10.670725822s) [2] r=-1 lpr=72 pi=[48,72)/1 crt=64'486 lcod 64'486 unknown NOTIFY pruub 123.448936462s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:50 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=72) [2] r=0 lpr=72 pi=[48,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:50 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Feb  2 06:33:50 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Feb  2 06:33:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Feb  2 06:33:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Feb  2 06:33:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 73 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[48,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 73 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[48,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 73 pg[9.17( v 64'485 (0'0,64'485] local-lis/les=0/0 n=6 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73) [2] r=0 lpr=73 pi=[56,73)/1 pct=0'0 crt=64'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 73 pg[9.17( v 64'485 (0'0,64'485] local-lis/les=0/0 n=6 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73) [2] r=0 lpr=73 pi=[56,73)/1 crt=64'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:51 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 73 pg[9.7( v 64'487 (0'0,64'487] local-lis/les=71/72 n=7 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73 pruub=15.006018639s) [2] async=[2] r=-1 lpr=73 pi=[56,73)/1 crt=64'487 lcod 64'486 active pruub 132.484069824s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:51 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 73 pg[9.7( v 64'487 (0'0,64'487] local-lis/les=71/72 n=7 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73 pruub=15.005968094s) [2] r=-1 lpr=73 pi=[56,73)/1 crt=64'487 lcod 64'486 unknown NOTIFY pruub 132.484069824s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:51 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 73 pg[9.17( v 64'485 (0'0,64'485] local-lis/les=71/72 n=6 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73 pruub=15.006196022s) [2] async=[2] r=-1 lpr=73 pi=[56,73)/1 crt=64'485 lcod 64'484 active pruub 132.484359741s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:51 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 73 pg[9.17( v 64'485 (0'0,64'485] local-lis/les=71/72 n=6 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73 pruub=15.006158829s) [2] r=-1 lpr=73 pi=[56,73)/1 crt=64'485 lcod 64'484 unknown NOTIFY pruub 132.484359741s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 73 pg[9.f( v 64'485 (0'0,64'485] local-lis/les=0/0 n=7 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73) [2] r=0 lpr=73 pi=[56,73)/1 pct=0'0 crt=64'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 73 pg[9.f( v 64'485 (0'0,64'485] local-lis/les=0/0 n=7 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73) [2] r=0 lpr=73 pi=[56,73)/1 crt=64'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:51 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 73 pg[9.f( v 64'485 (0'0,64'485] local-lis/les=71/72 n=7 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73 pruub=15.005282402s) [2] async=[2] r=-1 lpr=73 pi=[56,73)/1 crt=64'485 lcod 64'484 active pruub 132.484054565s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 73 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[48,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:51 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 73 pg[9.f( v 64'485 (0'0,64'485] local-lis/les=71/72 n=7 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73 pruub=15.005175591s) [2] r=-1 lpr=73 pi=[56,73)/1 crt=64'485 lcod 64'484 unknown NOTIFY pruub 132.484054565s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:51 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 73 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=71/72 n=6 ec=48/33 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.001982689s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=39'483 active pruub 132.480957031s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 73 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[48,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 73 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 73 pg[9.7( v 64'487 (0'0,64'487] local-lis/les=0/0 n=7 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73) [2] r=0 lpr=73 pi=[56,73)/1 pct=0'0 crt=64'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:51 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 73 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=71/72 n=6 ec=48/33 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.001864433s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=39'483 unknown NOTIFY pruub 132.480957031s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 73 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 73 pg[9.7( v 64'487 (0'0,64'487] local-lis/les=0/0 n=7 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73) [2] r=0 lpr=73 pi=[56,73)/1 crt=64'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:51 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 73 pg[9.18( v 64'487 (0'0,64'487] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=73) [2]/[1] r=0 lpr=73 pi=[48,73)/1 crt=64'486 lcod 64'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:51 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=73) [2]/[1] r=0 lpr=73 pi=[48,73)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:51 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=73) [2]/[1] r=0 lpr=73 pi=[48,73)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:51 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 73 pg[9.18( v 64'487 (0'0,64'487] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=73) [2]/[1] r=0 lpr=73 pi=[48,73)/1 crt=64'486 lcod 64'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 73 pg[6.8( v 34'39 (0'0,34'39] local-lis/les=72/73 n=1 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=72) [2] r=0 lpr=72 pi=[46,72)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.e scrub starts
Feb  2 06:33:51 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.e scrub ok
Feb  2 06:33:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 305 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:33:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Feb  2 06:33:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb  2 06:33:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Feb  2 06:33:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb  2 06:33:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:33:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Feb  2 06:33:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  2 06:33:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  2 06:33:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Feb  2 06:33:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Feb  2 06:33:52 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 74 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=8.342340469s) [0] r=-1 lpr=74 pi=[53,74)/1 crt=34'39 lcod 0'0 active pruub 123.014564514s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:52 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 74 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=8.342309952s) [0] r=-1 lpr=74 pi=[53,74)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 123.014564514s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:52 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 74 pg[9.17( v 64'485 (0'0,64'485] local-lis/les=73/74 n=6 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73) [2] r=0 lpr=73 pi=[56,73)/1 crt=64'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:52 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 74 pg[9.f( v 64'485 (0'0,64'485] local-lis/les=73/74 n=7 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73) [2] r=0 lpr=73 pi=[56,73)/1 crt=64'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:52 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 74 pg[9.7( v 64'487 (0'0,64'487] local-lis/les=73/74 n=7 ec=48/33 lis/c=71/56 les/c/f=72/57/0 sis=73) [2] r=0 lpr=73 pi=[56,73)/1 crt=64'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:52 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=73/74 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[48,73)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:52 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 74 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=73/74 n=6 ec=48/33 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:52 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 74 pg[6.9( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=74) [0] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:52 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 74 pg[9.18( v 64'487 (0'0,64'487] local-lis/les=73/74 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[48,73)/1 crt=64'487 lcod 64'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:52 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb  2 06:33:52 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb  2 06:33:52 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  2 06:33:52 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  2 06:33:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Feb  2 06:33:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Feb  2 06:33:53 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Feb  2 06:33:53 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=73/74 n=7 ec=48/33 lis/c=73/48 les/c/f=74/49/0 sis=75 pruub=15.003133774s) [2] async=[2] r=-1 lpr=75 pi=[48,75)/1 crt=39'483 lcod 0'0 active pruub 130.677566528s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:53 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=73/74 n=7 ec=48/33 lis/c=73/48 les/c/f=74/49/0 sis=75 pruub=15.003046989s) [2] r=-1 lpr=75 pi=[48,75)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 130.677566528s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:53 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 75 pg[9.18( v 64'487 (0'0,64'487] local-lis/les=73/74 n=6 ec=48/33 lis/c=73/48 les/c/f=74/49/0 sis=75 pruub=15.003237724s) [2] async=[2] r=-1 lpr=75 pi=[48,75)/1 crt=64'487 lcod 64'486 active pruub 130.677902222s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:53 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 75 pg[9.18( v 64'487 (0'0,64'487] local-lis/les=73/74 n=6 ec=48/33 lis/c=73/48 les/c/f=74/49/0 sis=75 pruub=15.003158569s) [2] r=-1 lpr=75 pi=[48,75)/1 crt=64'487 lcod 64'486 unknown NOTIFY pruub 130.677902222s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:53 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=73/48 les/c/f=74/49/0 sis=75) [2] r=0 lpr=75 pi=[48,75)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:53 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=73/48 les/c/f=74/49/0 sis=75) [2] r=0 lpr=75 pi=[48,75)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:53 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 75 pg[9.18( v 64'487 (0'0,64'487] local-lis/les=0/0 n=6 ec=48/33 lis/c=73/48 les/c/f=74/49/0 sis=75) [2] r=0 lpr=75 pi=[48,75)/1 pct=0'0 crt=64'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:53 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 75 pg[9.18( v 64'487 (0'0,64'487] local-lis/les=0/0 n=6 ec=48/33 lis/c=73/48 les/c/f=74/49/0 sis=75) [2] r=0 lpr=75 pi=[48,75)/1 crt=64'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:53 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 75 pg[6.9( v 34'39 (0'0,34'39] local-lis/les=74/75 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=74) [0] r=0 lpr=74 pi=[53,74)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:53 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Feb  2 06:33:53 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Feb  2 06:33:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 2 active+remapped, 303 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 422 B/s, 9 objects/s recovering
Feb  2 06:33:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Feb  2 06:33:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb  2 06:33:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Feb  2 06:33:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb  2 06:33:53 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.b scrub starts
Feb  2 06:33:53 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 4.b scrub ok
Feb  2 06:33:54 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Feb  2 06:33:54 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Feb  2 06:33:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Feb  2 06:33:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  2 06:33:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  2 06:33:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Feb  2 06:33:54 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb  2 06:33:54 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb  2 06:33:54 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Feb  2 06:33:54 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 76 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=75/76 n=7 ec=48/33 lis/c=73/48 les/c/f=74/49/0 sis=75) [2] r=0 lpr=75 pi=[48,75)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:54 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 76 pg[9.18( v 64'487 (0'0,64'487] local-lis/les=75/76 n=6 ec=48/33 lis/c=73/48 les/c/f=74/49/0 sis=75) [2] r=0 lpr=75 pi=[48,75)/1 crt=64'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:54 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.a scrub starts
Feb  2 06:33:54 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.a scrub ok
Feb  2 06:33:54 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 76 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=59/61 n=1 ec=46/21 lis/c=59/59 les/c/f=61/61/0 sis=76 pruub=14.266259193s) [0] r=-1 lpr=76 pi=[59,76)/1 crt=34'39 lcod 0'0 active pruub 131.146820068s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:54 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 76 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=59/61 n=1 ec=46/21 lis/c=59/59 les/c/f=61/61/0 sis=76 pruub=14.266130447s) [0] r=-1 lpr=76 pi=[59,76)/1 crt=34'39 lcod 0'0 unknown NOTIFY pruub 131.146820068s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:54 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=59/59 les/c/f=61/61/0 sis=76) [0] r=0 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:55 np0005604943 systemd-logind[786]: New session 33 of user zuul.
Feb  2 06:33:55 np0005604943 systemd[1]: Started Session 33 of User zuul.
Feb  2 06:33:55 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Feb  2 06:33:55 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Feb  2 06:33:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Feb  2 06:33:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Feb  2 06:33:55 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Feb  2 06:33:55 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  2 06:33:55 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  2 06:33:55 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 77 pg[6.a( v 34'39 (0'0,34'39] local-lis/les=76/77 n=1 ec=46/21 lis/c=59/59 les/c/f=61/61/0 sis=76) [0] r=0 lpr=76 pi=[59,76)/1 crt=34'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v151: 305 pgs: 2 active+remapped, 303 active+clean; 462 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 422 B/s, 9 objects/s recovering
Feb  2 06:33:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Feb  2 06:33:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb  2 06:33:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Feb  2 06:33:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb  2 06:33:55 np0005604943 python3.9[98824]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:33:56 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Feb  2 06:33:56 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Feb  2 06:33:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Feb  2 06:33:56 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  2 06:33:56 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  2 06:33:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Feb  2 06:33:56 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Feb  2 06:33:56 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb  2 06:33:56 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb  2 06:33:56 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Feb  2 06:33:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:33:56 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Feb  2 06:33:57 np0005604943 python3.9[99042]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:33:57 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 78 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=78 pruub=11.535334587s) [1] r=-1 lpr=78 pi=[60,78)/1 crt=34'39 active pruub 134.964843750s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:57 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 78 pg[6.b( v 34'39 (0'0,34'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=78 pruub=11.535291672s) [1] r=-1 lpr=78 pi=[60,78)/1 crt=34'39 unknown NOTIFY pruub 134.964843750s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:57 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=78) [1] r=0 lpr=78 pi=[60,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Feb  2 06:33:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Feb  2 06:33:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Feb  2 06:33:57 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  2 06:33:57 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  2 06:33:57 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 79 pg[6.b( v 34'39 lc 0'0 (0'0,34'39] local-lis/les=78/79 n=1 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=78) [1] r=0 lpr=78 pi=[60,78)/1 crt=34'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:33:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v154: 305 pgs: 305 active+clean; 462 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Feb  2 06:33:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Feb  2 06:33:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb  2 06:33:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Feb  2 06:33:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb  2 06:33:58 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Feb  2 06:33:58 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Feb  2 06:33:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Feb  2 06:33:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  2 06:33:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  2 06:33:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Feb  2 06:33:58 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Feb  2 06:33:58 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb  2 06:33:58 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb  2 06:33:58 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=80 pruub=10.315998077s) [2] r=-1 lpr=80 pi=[48,80)/1 crt=39'483 lcod 0'0 active pruub 131.448654175s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:58 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=80 pruub=10.315964699s) [2] r=-1 lpr=80 pi=[48,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 131.448654175s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:58 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 80 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=80 pruub=10.315821648s) [2] r=-1 lpr=80 pi=[48,80)/1 crt=64'486 lcod 64'486 active pruub 131.449264526s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:58 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 80 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=80 pruub=10.315778732s) [2] r=-1 lpr=80 pi=[48,80)/1 crt=64'486 lcod 64'486 unknown NOTIFY pruub 131.449264526s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:58 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=80) [2] r=0 lpr=80 pi=[48,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:58 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=80) [2] r=0 lpr=80 pi=[48,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:58 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Feb  2 06:33:58 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Feb  2 06:33:59 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Feb  2 06:33:59 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Feb  2 06:33:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Feb  2 06:33:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  2 06:33:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  2 06:33:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Feb  2 06:33:59 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Feb  2 06:33:59 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[48,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:59 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[48,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:59 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[48,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:59 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[48,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:33:59 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 81 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=81) [2]/[1] r=0 lpr=81 pi=[48,81)/1 crt=64'486 lcod 64'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:59 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 81 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=48/49 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=81) [2]/[1] r=0 lpr=81 pi=[48,81)/1 crt=64'486 lcod 64'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:59 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=81) [2]/[1] r=0 lpr=81 pi=[48,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:33:59 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=48/49 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=81) [2]/[1] r=0 lpr=81 pi=[48,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:33:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v157: 305 pgs: 305 active+clean; 462 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Feb  2 06:33:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Feb  2 06:33:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb  2 06:33:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Feb  2 06:33:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb  2 06:34:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Feb  2 06:34:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  2 06:34:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  2 06:34:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Feb  2 06:34:00 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb  2 06:34:00 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb  2 06:34:00 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Feb  2 06:34:00 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 82 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=63/64 n=1 ec=46/21 lis/c=63/63 les/c/f=64/64/0 sis=82 pruub=12.440239906s) [1] r=-1 lpr=82 pi=[63,82)/1 crt=34'39 active pruub 139.039031982s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:00 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 82 pg[6.d( v 34'39 (0'0,34'39] local-lis/les=63/64 n=1 ec=46/21 lis/c=63/63 les/c/f=64/64/0 sis=82 pruub=12.440140724s) [1] r=-1 lpr=82 pi=[63,82)/1 crt=34'39 unknown NOTIFY pruub 139.039031982s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:00 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 82 pg[6.d( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=63/63 les/c/f=64/64/0 sis=82) [1] r=0 lpr=82 pi=[63,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:00 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Feb  2 06:34:00 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Feb  2 06:34:01 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[48,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:01 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 82 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=81/82 n=6 ec=48/33 lis/c=48/48 les/c/f=49/49/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[48,81)/1 crt=64'487 lcod 64'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Feb  2 06:34:01 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 83 pg[6.d( v 34'39 lc 32'8 (0'0,34'39] local-lis/les=82/83 n=1 ec=46/21 lis/c=63/63 les/c/f=64/64/0 sis=82) [1] r=0 lpr=82 pi=[63,82)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 305 active+clean; 462 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Feb  2 06:34:01 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Feb  2 06:34:01 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=48/33 lis/c=81/48 les/c/f=82/49/0 sis=84 pruub=15.451940536s) [2] async=[2] r=-1 lpr=84 pi=[48,84)/1 crt=39'483 lcod 0'0 active pruub 139.739059448s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:01 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=48/33 lis/c=81/48 les/c/f=82/49/0 sis=84 pruub=15.451580048s) [2] r=-1 lpr=84 pi=[48,84)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 139.739059448s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:01 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 84 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=81/82 n=6 ec=48/33 lis/c=81/48 les/c/f=82/49/0 sis=84 pruub=15.483525276s) [2] async=[2] r=-1 lpr=84 pi=[48,84)/1 crt=64'487 lcod 64'486 active pruub 139.771423340s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:01 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 84 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=81/82 n=6 ec=48/33 lis/c=81/48 les/c/f=82/49/0 sis=84 pruub=15.483476639s) [2] r=-1 lpr=84 pi=[48,84)/1 crt=64'487 lcod 64'486 unknown NOTIFY pruub 139.771423340s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:02 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 84 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=0/0 n=6 ec=48/33 lis/c=81/48 les/c/f=82/49/0 sis=84) [2] r=0 lpr=84 pi=[48,84)/1 pct=0'0 crt=64'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:02 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 84 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=0/0 n=6 ec=48/33 lis/c=81/48 les/c/f=82/49/0 sis=84) [2] r=0 lpr=84 pi=[48,84)/1 crt=64'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:02 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=81/48 les/c/f=82/49/0 sis=84) [2] r=0 lpr=84 pi=[48,84)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:02 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=48/33 lis/c=81/48 les/c/f=82/49/0 sis=84) [2] r=0 lpr=84 pi=[48,84)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:34:02 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Feb  2 06:34:02 np0005604943 podman[99208]: 2026-02-02 11:34:02.433344743 +0000 UTC m=+0.053792905 container create dbafcf81500aeff79612954584eeff81400abbee52551c9e82af22c042067478 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_austin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:34:02 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Feb  2 06:34:02 np0005604943 podman[99208]: 2026-02-02 11:34:02.400657451 +0000 UTC m=+0.021105643 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:34:02 np0005604943 systemd[1]: Started libpod-conmon-dbafcf81500aeff79612954584eeff81400abbee52551c9e82af22c042067478.scope.
Feb  2 06:34:02 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:34:02 np0005604943 podman[99208]: 2026-02-02 11:34:02.580260366 +0000 UTC m=+0.200708618 container init dbafcf81500aeff79612954584eeff81400abbee52551c9e82af22c042067478 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_austin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 06:34:02 np0005604943 podman[99208]: 2026-02-02 11:34:02.585981289 +0000 UTC m=+0.206429481 container start dbafcf81500aeff79612954584eeff81400abbee52551c9e82af22c042067478 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:34:02 np0005604943 intelligent_austin[99224]: 167 167
Feb  2 06:34:02 np0005604943 systemd[1]: libpod-dbafcf81500aeff79612954584eeff81400abbee52551c9e82af22c042067478.scope: Deactivated successfully.
Feb  2 06:34:02 np0005604943 conmon[99224]: conmon dbafcf81500aeff79612 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dbafcf81500aeff79612954584eeff81400abbee52551c9e82af22c042067478.scope/container/memory.events
Feb  2 06:34:02 np0005604943 podman[99208]: 2026-02-02 11:34:02.634180582 +0000 UTC m=+0.254628774 container attach dbafcf81500aeff79612954584eeff81400abbee52551c9e82af22c042067478 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_austin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:34:02 np0005604943 podman[99208]: 2026-02-02 11:34:02.634644441 +0000 UTC m=+0.255092603 container died dbafcf81500aeff79612954584eeff81400abbee52551c9e82af22c042067478 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_austin, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:34:02 np0005604943 systemd[1]: var-lib-containers-storage-overlay-62676aad719cbe3c94cea4ae1ca7bcc6c26e927391c716c476bce9b1c0977546-merged.mount: Deactivated successfully.
Feb  2 06:34:02 np0005604943 podman[99208]: 2026-02-02 11:34:02.826570369 +0000 UTC m=+0.447018531 container remove dbafcf81500aeff79612954584eeff81400abbee52551c9e82af22c042067478 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 06:34:02 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Feb  2 06:34:02 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Feb  2 06:34:02 np0005604943 systemd[1]: libpod-conmon-dbafcf81500aeff79612954584eeff81400abbee52551c9e82af22c042067478.scope: Deactivated successfully.
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Feb  2 06:34:02 np0005604943 podman[99248]: 2026-02-02 11:34:02.957162675 +0000 UTC m=+0.047004642 container create de7fe8315a28b368c94e781e7df32d6cd9f99dbc5449c5da8b525e06280ceec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_euler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Feb  2 06:34:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Feb  2 06:34:02 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 85 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=84/85 n=7 ec=48/33 lis/c=81/48 les/c/f=82/49/0 sis=84) [2] r=0 lpr=84 pi=[48,84)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:02 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 85 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=84/85 n=6 ec=48/33 lis/c=81/48 les/c/f=82/49/0 sis=84) [2] r=0 lpr=84 pi=[48,84)/1 crt=64'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:03 np0005604943 systemd[1]: Started libpod-conmon-de7fe8315a28b368c94e781e7df32d6cd9f99dbc5449c5da8b525e06280ceec8.scope.
Feb  2 06:34:03 np0005604943 podman[99248]: 2026-02-02 11:34:02.930153456 +0000 UTC m=+0.019995443 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:34:03 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:34:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19096423325bcb348ecc5ff7f8557718ccadeb944cf0bd382b40b8452b42b1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:34:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19096423325bcb348ecc5ff7f8557718ccadeb944cf0bd382b40b8452b42b1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:34:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19096423325bcb348ecc5ff7f8557718ccadeb944cf0bd382b40b8452b42b1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:34:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19096423325bcb348ecc5ff7f8557718ccadeb944cf0bd382b40b8452b42b1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:34:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19096423325bcb348ecc5ff7f8557718ccadeb944cf0bd382b40b8452b42b1c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:34:03 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Feb  2 06:34:03 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Feb  2 06:34:03 np0005604943 podman[99248]: 2026-02-02 11:34:03.084362321 +0000 UTC m=+0.174204308 container init de7fe8315a28b368c94e781e7df32d6cd9f99dbc5449c5da8b525e06280ceec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:34:03 np0005604943 podman[99248]: 2026-02-02 11:34:03.090083594 +0000 UTC m=+0.179925561 container start de7fe8315a28b368c94e781e7df32d6cd9f99dbc5449c5da8b525e06280ceec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:34:03 np0005604943 podman[99248]: 2026-02-02 11:34:03.098937324 +0000 UTC m=+0.188779311 container attach de7fe8315a28b368c94e781e7df32d6cd9f99dbc5449c5da8b525e06280ceec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 06:34:03 np0005604943 dreamy_euler[99265]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:34:03 np0005604943 dreamy_euler[99265]: --> All data devices are unavailable
Feb  2 06:34:03 np0005604943 systemd[1]: libpod-de7fe8315a28b368c94e781e7df32d6cd9f99dbc5449c5da8b525e06280ceec8.scope: Deactivated successfully.
Feb  2 06:34:03 np0005604943 podman[99248]: 2026-02-02 11:34:03.486108641 +0000 UTC m=+0.575950608 container died de7fe8315a28b368c94e781e7df32d6cd9f99dbc5449c5da8b525e06280ceec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:34:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 3 objects/s recovering
Feb  2 06:34:03 np0005604943 systemd[1]: var-lib-containers-storage-overlay-d19096423325bcb348ecc5ff7f8557718ccadeb944cf0bd382b40b8452b42b1c-merged.mount: Deactivated successfully.
Feb  2 06:34:03 np0005604943 podman[99248]: 2026-02-02 11:34:03.654077185 +0000 UTC m=+0.743919162 container remove de7fe8315a28b368c94e781e7df32d6cd9f99dbc5449c5da8b525e06280ceec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:34:03 np0005604943 systemd[1]: libpod-conmon-de7fe8315a28b368c94e781e7df32d6cd9f99dbc5449c5da8b525e06280ceec8.scope: Deactivated successfully.
Feb  2 06:34:04 np0005604943 podman[99367]: 2026-02-02 11:34:04.04946487 +0000 UTC m=+0.050218871 container create 42357f08ff4f2704b710fd9a1e14af913bc0784b13a694bcd8a55008d6c5f17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_chaum, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:34:04 np0005604943 systemd[1]: Started libpod-conmon-42357f08ff4f2704b710fd9a1e14af913bc0784b13a694bcd8a55008d6c5f17a.scope.
Feb  2 06:34:04 np0005604943 podman[99367]: 2026-02-02 11:34:04.017659764 +0000 UTC m=+0.018413845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:34:04 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:34:04 np0005604943 podman[99367]: 2026-02-02 11:34:04.16701413 +0000 UTC m=+0.167768161 container init 42357f08ff4f2704b710fd9a1e14af913bc0784b13a694bcd8a55008d6c5f17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 06:34:04 np0005604943 podman[99367]: 2026-02-02 11:34:04.173950876 +0000 UTC m=+0.174704877 container start 42357f08ff4f2704b710fd9a1e14af913bc0784b13a694bcd8a55008d6c5f17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_chaum, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:34:04 np0005604943 elastic_chaum[99384]: 167 167
Feb  2 06:34:04 np0005604943 systemd[1]: libpod-42357f08ff4f2704b710fd9a1e14af913bc0784b13a694bcd8a55008d6c5f17a.scope: Deactivated successfully.
Feb  2 06:34:04 np0005604943 podman[99367]: 2026-02-02 11:34:04.190244092 +0000 UTC m=+0.190998113 container attach 42357f08ff4f2704b710fd9a1e14af913bc0784b13a694bcd8a55008d6c5f17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_chaum, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 06:34:04 np0005604943 podman[99367]: 2026-02-02 11:34:04.190607638 +0000 UTC m=+0.191361669 container died 42357f08ff4f2704b710fd9a1e14af913bc0784b13a694bcd8a55008d6c5f17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_chaum, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:34:04 np0005604943 systemd[1]: var-lib-containers-storage-overlay-816ea6f345c52f214c989b58d88b2d3c3588a11ac4be3beb9011e6bb6daa9328-merged.mount: Deactivated successfully.
Feb  2 06:34:04 np0005604943 podman[99367]: 2026-02-02 11:34:04.352524543 +0000 UTC m=+0.353278544 container remove 42357f08ff4f2704b710fd9a1e14af913bc0784b13a694bcd8a55008d6c5f17a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_chaum, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 06:34:04 np0005604943 systemd[1]: libpod-conmon-42357f08ff4f2704b710fd9a1e14af913bc0784b13a694bcd8a55008d6c5f17a.scope: Deactivated successfully.
Feb  2 06:34:04 np0005604943 podman[99435]: 2026-02-02 11:34:04.520366495 +0000 UTC m=+0.100703307 container create de92a81aa4afabd159c3f1a7fd238ac11bfbe2bcfb34392647e927af33635074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 06:34:04 np0005604943 podman[99435]: 2026-02-02 11:34:04.450872735 +0000 UTC m=+0.031209537 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:34:04 np0005604943 systemd[1]: session-33.scope: Deactivated successfully.
Feb  2 06:34:04 np0005604943 systemd[1]: session-33.scope: Consumed 7.601s CPU time.
Feb  2 06:34:04 np0005604943 systemd-logind[786]: Session 33 logged out. Waiting for processes to exit.
Feb  2 06:34:04 np0005604943 systemd-logind[786]: Removed session 33.
Feb  2 06:34:04 np0005604943 systemd[1]: Started libpod-conmon-de92a81aa4afabd159c3f1a7fd238ac11bfbe2bcfb34392647e927af33635074.scope.
Feb  2 06:34:04 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:34:04 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/964bb6af486acca373643cb66a0a5f0272ed0edb21359a3525c5171a7c068632/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:34:04 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/964bb6af486acca373643cb66a0a5f0272ed0edb21359a3525c5171a7c068632/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:34:04 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/964bb6af486acca373643cb66a0a5f0272ed0edb21359a3525c5171a7c068632/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:34:04 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/964bb6af486acca373643cb66a0a5f0272ed0edb21359a3525c5171a7c068632/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:34:04 np0005604943 podman[99435]: 2026-02-02 11:34:04.696065458 +0000 UTC m=+0.276402260 container init de92a81aa4afabd159c3f1a7fd238ac11bfbe2bcfb34392647e927af33635074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:34:04 np0005604943 podman[99435]: 2026-02-02 11:34:04.701377214 +0000 UTC m=+0.281713986 container start de92a81aa4afabd159c3f1a7fd238ac11bfbe2bcfb34392647e927af33635074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ishizaka, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:34:04 np0005604943 podman[99435]: 2026-02-02 11:34:04.734535456 +0000 UTC m=+0.314872248 container attach de92a81aa4afabd159c3f1a7fd238ac11bfbe2bcfb34392647e927af33635074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]: {
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:    "0": [
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:        {
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "devices": [
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "/dev/loop3"
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            ],
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_name": "ceph_lv0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_size": "21470642176",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "name": "ceph_lv0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "tags": {
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.cluster_name": "ceph",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.crush_device_class": "",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.encrypted": "0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.objectstore": "bluestore",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.osd_id": "0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.type": "block",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.vdo": "0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.with_tpm": "0"
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            },
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "type": "block",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "vg_name": "ceph_vg0"
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:        }
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:    ],
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:    "1": [
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:        {
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "devices": [
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "/dev/loop4"
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            ],
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_name": "ceph_lv1",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_size": "21470642176",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "name": "ceph_lv1",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "tags": {
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.cluster_name": "ceph",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.crush_device_class": "",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.encrypted": "0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.objectstore": "bluestore",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.osd_id": "1",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.type": "block",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.vdo": "0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.with_tpm": "0"
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            },
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "type": "block",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "vg_name": "ceph_vg1"
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:        }
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:    ],
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:    "2": [
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:        {
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "devices": [
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "/dev/loop5"
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            ],
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_name": "ceph_lv2",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_size": "21470642176",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "name": "ceph_lv2",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "tags": {
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.cluster_name": "ceph",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.crush_device_class": "",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.encrypted": "0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.objectstore": "bluestore",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.osd_id": "2",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.type": "block",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.vdo": "0",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:                "ceph.with_tpm": "0"
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            },
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "type": "block",
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:            "vg_name": "ceph_vg2"
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:        }
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]:    ]
Feb  2 06:34:04 np0005604943 strange_ishizaka[99451]: }
Feb  2 06:34:04 np0005604943 systemd[1]: libpod-de92a81aa4afabd159c3f1a7fd238ac11bfbe2bcfb34392647e927af33635074.scope: Deactivated successfully.
Feb  2 06:34:04 np0005604943 podman[99435]: 2026-02-02 11:34:04.963552765 +0000 UTC m=+0.543889547 container died de92a81aa4afabd159c3f1a7fd238ac11bfbe2bcfb34392647e927af33635074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 06:34:05 np0005604943 systemd[1]: var-lib-containers-storage-overlay-964bb6af486acca373643cb66a0a5f0272ed0edb21359a3525c5171a7c068632-merged.mount: Deactivated successfully.
Feb  2 06:34:05 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Feb  2 06:34:05 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Feb  2 06:34:05 np0005604943 podman[99435]: 2026-02-02 11:34:05.193672985 +0000 UTC m=+0.774009747 container remove de92a81aa4afabd159c3f1a7fd238ac11bfbe2bcfb34392647e927af33635074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ishizaka, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:34:05 np0005604943 systemd[1]: libpod-conmon-de92a81aa4afabd159c3f1a7fd238ac11bfbe2bcfb34392647e927af33635074.scope: Deactivated successfully.
Feb  2 06:34:05 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.b scrub starts
Feb  2 06:34:05 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.b scrub ok
Feb  2 06:34:05 np0005604943 podman[99536]: 2026-02-02 11:34:05.588354728 +0000 UTC m=+0.061010526 container create 12d8a860801b564bc276581e31ae606d190a221d4c25630cc00ad7039effd89b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_buck, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 06:34:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 2 peering, 303 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 95 B/s, 3 objects/s recovering
Feb  2 06:34:05 np0005604943 podman[99536]: 2026-02-02 11:34:05.548119269 +0000 UTC m=+0.020775067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:34:05 np0005604943 systemd[1]: Started libpod-conmon-12d8a860801b564bc276581e31ae606d190a221d4c25630cc00ad7039effd89b.scope.
Feb  2 06:34:05 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:34:05 np0005604943 podman[99536]: 2026-02-02 11:34:05.744495368 +0000 UTC m=+0.217151166 container init 12d8a860801b564bc276581e31ae606d190a221d4c25630cc00ad7039effd89b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 06:34:05 np0005604943 podman[99536]: 2026-02-02 11:34:05.748809236 +0000 UTC m=+0.221465024 container start 12d8a860801b564bc276581e31ae606d190a221d4c25630cc00ad7039effd89b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 06:34:05 np0005604943 suspicious_buck[99552]: 167 167
Feb  2 06:34:05 np0005604943 systemd[1]: libpod-12d8a860801b564bc276581e31ae606d190a221d4c25630cc00ad7039effd89b.scope: Deactivated successfully.
Feb  2 06:34:05 np0005604943 podman[99536]: 2026-02-02 11:34:05.78045842 +0000 UTC m=+0.253114228 container attach 12d8a860801b564bc276581e31ae606d190a221d4c25630cc00ad7039effd89b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_buck, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Feb  2 06:34:05 np0005604943 podman[99536]: 2026-02-02 11:34:05.780848016 +0000 UTC m=+0.253503804 container died 12d8a860801b564bc276581e31ae606d190a221d4c25630cc00ad7039effd89b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:34:05 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.e scrub starts
Feb  2 06:34:05 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.e scrub ok
Feb  2 06:34:05 np0005604943 systemd[1]: var-lib-containers-storage-overlay-0bb27889d9508066f3a1400bd72097e52ad0876c5ad366e94980661d2a63c664-merged.mount: Deactivated successfully.
Feb  2 06:34:06 np0005604943 podman[99536]: 2026-02-02 11:34:06.171469315 +0000 UTC m=+0.644125103 container remove 12d8a860801b564bc276581e31ae606d190a221d4c25630cc00ad7039effd89b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_buck, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:34:06 np0005604943 systemd[1]: libpod-conmon-12d8a860801b564bc276581e31ae606d190a221d4c25630cc00ad7039effd89b.scope: Deactivated successfully.
Feb  2 06:34:06 np0005604943 podman[99576]: 2026-02-02 11:34:06.320048828 +0000 UTC m=+0.053121844 container create 2c342d8eae312d3d52d8b621f64003096c6b80124585e92a09cd175ea099bb25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_jemison, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 06:34:06 np0005604943 podman[99576]: 2026-02-02 11:34:06.289739948 +0000 UTC m=+0.022812944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:34:06 np0005604943 systemd[1]: Started libpod-conmon-2c342d8eae312d3d52d8b621f64003096c6b80124585e92a09cd175ea099bb25.scope.
Feb  2 06:34:06 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:34:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdb16e376fa93de6c1a3b19b8d5f2cb8d728b1323bcdd191ebe888556ed804/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:34:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdb16e376fa93de6c1a3b19b8d5f2cb8d728b1323bcdd191ebe888556ed804/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:34:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdb16e376fa93de6c1a3b19b8d5f2cb8d728b1323bcdd191ebe888556ed804/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:34:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdb16e376fa93de6c1a3b19b8d5f2cb8d728b1323bcdd191ebe888556ed804/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:34:06 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Feb  2 06:34:06 np0005604943 podman[99576]: 2026-02-02 11:34:06.472332458 +0000 UTC m=+0.205405464 container init 2c342d8eae312d3d52d8b621f64003096c6b80124585e92a09cd175ea099bb25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:34:06 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Feb  2 06:34:06 np0005604943 podman[99576]: 2026-02-02 11:34:06.477014172 +0000 UTC m=+0.210087148 container start 2c342d8eae312d3d52d8b621f64003096c6b80124585e92a09cd175ea099bb25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 06:34:06 np0005604943 podman[99576]: 2026-02-02 11:34:06.488816017 +0000 UTC m=+0.221889023 container attach 2c342d8eae312d3d52d8b621f64003096c6b80124585e92a09cd175ea099bb25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_jemison, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 06:34:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:34:07 np0005604943 lvm[99668]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:34:07 np0005604943 lvm[99668]: VG ceph_vg0 finished
Feb  2 06:34:07 np0005604943 lvm[99671]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:34:07 np0005604943 lvm[99671]: VG ceph_vg1 finished
Feb  2 06:34:07 np0005604943 lvm[99673]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:34:07 np0005604943 lvm[99673]: VG ceph_vg2 finished
Feb  2 06:34:07 np0005604943 vibrant_jemison[99592]: {}
Feb  2 06:34:07 np0005604943 systemd[1]: libpod-2c342d8eae312d3d52d8b621f64003096c6b80124585e92a09cd175ea099bb25.scope: Deactivated successfully.
Feb  2 06:34:07 np0005604943 podman[99576]: 2026-02-02 11:34:07.179512293 +0000 UTC m=+0.912585269 container died 2c342d8eae312d3d52d8b621f64003096c6b80124585e92a09cd175ea099bb25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 06:34:07 np0005604943 systemd[1]: var-lib-containers-storage-overlay-78cdb16e376fa93de6c1a3b19b8d5f2cb8d728b1323bcdd191ebe888556ed804-merged.mount: Deactivated successfully.
Feb  2 06:34:07 np0005604943 podman[99576]: 2026-02-02 11:34:07.341864355 +0000 UTC m=+1.074937351 container remove 2c342d8eae312d3d52d8b621f64003096c6b80124585e92a09cd175ea099bb25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:34:07 np0005604943 systemd[1]: libpod-conmon-2c342d8eae312d3d52d8b621f64003096c6b80124585e92a09cd175ea099bb25.scope: Deactivated successfully.
Feb  2 06:34:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:34:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:34:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:34:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:34:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v165: 305 pgs: 305 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 2 objects/s recovering
Feb  2 06:34:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Feb  2 06:34:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb  2 06:34:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Feb  2 06:34:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb  2 06:34:08 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Feb  2 06:34:08 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Feb  2 06:34:08 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:34:08 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:34:08 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb  2 06:34:08 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb  2 06:34:08 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.c scrub starts
Feb  2 06:34:08 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.c scrub ok
Feb  2 06:34:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Feb  2 06:34:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  2 06:34:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  2 06:34:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Feb  2 06:34:08 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Feb  2 06:34:08 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.e scrub starts
Feb  2 06:34:08 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.e scrub ok
Feb  2 06:34:09 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Feb  2 06:34:09 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Feb  2 06:34:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:34:09
Feb  2 06:34:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:34:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:34:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['.mgr', 'volumes', 'images', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta']
Feb  2 06:34:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:34:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 305 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 2 objects/s recovering
Feb  2 06:34:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Feb  2 06:34:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb  2 06:34:09 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  2 06:34:09 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  2 06:34:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Feb  2 06:34:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb  2 06:34:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Feb  2 06:34:09 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Feb  2 06:34:09 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.c scrub starts
Feb  2 06:34:09 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 86 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=86 pruub=15.036943436s) [2] r=-1 lpr=86 pi=[60,86)/1 crt=34'39 active pruub 150.962066650s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:09 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 87 pg[6.f( v 34'39 (0'0,34'39] local-lis/les=60/61 n=1 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=86 pruub=15.036884308s) [2] r=-1 lpr=86 pi=[60,86)/1 crt=34'39 unknown NOTIFY pruub 150.962066650s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:09 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.c scrub ok
Feb  2 06:34:09 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 87 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=86) [2] r=0 lpr=87 pi=[60,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:10 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.a scrub starts
Feb  2 06:34:10 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.a scrub ok
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:34:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:34:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Feb  2 06:34:10 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb  2 06:34:10 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb  2 06:34:10 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Feb  2 06:34:10 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Feb  2 06:34:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Feb  2 06:34:10 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Feb  2 06:34:10 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 88 pg[6.f( v 34'39 lc 32'1 (0'0,34'39] local-lis/les=86/88 n=1 ec=46/21 lis/c=60/60 les/c/f=61/61/0 sis=86) [2] r=0 lpr=87 pi=[60,86)/1 crt=34'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:11 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Feb  2 06:34:11 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Feb  2 06:34:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 305 active+clean; 462 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:34:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Feb  2 06:34:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb  2 06:34:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Feb  2 06:34:11 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb  2 06:34:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb  2 06:34:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Feb  2 06:34:11 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Feb  2 06:34:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:34:12 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb  2 06:34:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 122 B/s, 0 objects/s recovering
Feb  2 06:34:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Feb  2 06:34:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb  2 06:34:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Feb  2 06:34:14 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb  2 06:34:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb  2 06:34:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Feb  2 06:34:14 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Feb  2 06:34:14 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Feb  2 06:34:14 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Feb  2 06:34:15 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Feb  2 06:34:15 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Feb  2 06:34:15 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb  2 06:34:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 105 B/s, 0 objects/s recovering
Feb  2 06:34:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Feb  2 06:34:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb  2 06:34:16 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Feb  2 06:34:16 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Feb  2 06:34:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Feb  2 06:34:16 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb  2 06:34:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb  2 06:34:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Feb  2 06:34:16 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 91 pg[9.13( v 64'485 (0'0,64'485] local-lis/les=57/58 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=91 pruub=12.458911896s) [2] r=-1 lpr=91 pi=[57,91)/1 crt=63'484 lcod 63'484 active pruub 154.964309692s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:16 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 91 pg[9.13( v 64'485 (0'0,64'485] local-lis/les=57/58 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=91 pruub=12.458766937s) [2] r=-1 lpr=91 pi=[57,91)/1 crt=63'484 lcod 63'484 unknown NOTIFY pruub 154.964309692s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:16 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Feb  2 06:34:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 91 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=91) [2] r=0 lpr=91 pi=[57,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:34:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Feb  2 06:34:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Feb  2 06:34:16 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Feb  2 06:34:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=92) [2]/[0] r=-1 lpr=92 pi=[57,92)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:16 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=92) [2]/[0] r=-1 lpr=92 pi=[57,92)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:16 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 92 pg[9.13( v 64'485 (0'0,64'485] local-lis/les=57/58 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=92) [2]/[0] r=0 lpr=92 pi=[57,92)/1 crt=63'484 lcod 63'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:16 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 92 pg[9.13( v 64'485 (0'0,64'485] local-lis/les=57/58 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=92) [2]/[0] r=0 lpr=92 pi=[57,92)/1 crt=63'484 lcod 63'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:16 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Feb  2 06:34:16 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Feb  2 06:34:17 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb  2 06:34:17 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Feb  2 06:34:17 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Feb  2 06:34:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 0 objects/s recovering
Feb  2 06:34:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Feb  2 06:34:17 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb  2 06:34:17 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Feb  2 06:34:17 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Feb  2 06:34:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Feb  2 06:34:17 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb  2 06:34:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Feb  2 06:34:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Feb  2 06:34:17 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 93 pg[9.13( v 64'485 (0'0,64'485] local-lis/les=92/93 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=92) [2]/[0] async=[2] r=0 lpr=92 pi=[57,92)/1 crt=64'485 lcod 63'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:17 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.b scrub starts
Feb  2 06:34:17 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.b scrub ok
Feb  2 06:34:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb  2 06:34:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb  2 06:34:18 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Feb  2 06:34:18 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Feb  2 06:34:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Feb  2 06:34:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Feb  2 06:34:18 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Feb  2 06:34:18 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 94 pg[9.13( v 64'485 (0'0,64'485] local-lis/les=92/93 n=6 ec=48/33 lis/c=92/57 les/c/f=93/58/0 sis=94 pruub=15.001072884s) [2] async=[2] r=-1 lpr=94 pi=[57,94)/1 crt=64'485 lcod 63'484 active pruub 160.088973999s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:18 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 94 pg[9.13( v 64'485 (0'0,64'485] local-lis/les=92/93 n=6 ec=48/33 lis/c=92/57 les/c/f=93/58/0 sis=94 pruub=15.000780106s) [2] r=-1 lpr=94 pi=[57,94)/1 crt=64'485 lcod 63'484 unknown NOTIFY pruub 160.088973999s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:18 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 94 pg[9.13( v 64'485 (0'0,64'485] local-lis/les=0/0 n=6 ec=48/33 lis/c=92/57 les/c/f=93/58/0 sis=94) [2] r=0 lpr=94 pi=[57,94)/1 pct=0'0 crt=64'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:18 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 94 pg[9.13( v 64'485 (0'0,64'485] local-lis/les=0/0 n=6 ec=48/33 lis/c=92/57 les/c/f=93/58/0 sis=94) [2] r=0 lpr=94 pi=[57,94)/1 crt=64'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:19 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Feb  2 06:34:19 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Feb  2 06:34:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Feb  2 06:34:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Feb  2 06:34:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb  2 06:34:19 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Feb  2 06:34:19 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Feb  2 06:34:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Feb  2 06:34:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb  2 06:34:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Feb  2 06:34:19 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Feb  2 06:34:19 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 95 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=95 pruub=15.783133507s) [1] r=-1 lpr=95 pi=[56,95)/1 crt=39'483 active pruub 161.879928589s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:19 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 95 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=95 pruub=15.783088684s) [1] r=-1 lpr=95 pi=[56,95)/1 crt=39'483 unknown NOTIFY pruub 161.879928589s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:19 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=95) [1] r=0 lpr=95 pi=[56,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:19 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 95 pg[9.13( v 64'485 (0'0,64'485] local-lis/les=94/95 n=6 ec=48/33 lis/c=92/57 les/c/f=93/58/0 sis=94) [2] r=0 lpr=94 pi=[57,94)/1 crt=64'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:20 np0005604943 systemd-logind[786]: New session 34 of user zuul.
Feb  2 06:34:20 np0005604943 systemd[1]: Started Session 34 of User zuul.
Feb  2 06:34:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb  2 06:34:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb  2 06:34:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Feb  2 06:34:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Feb  2 06:34:21 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Feb  2 06:34:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 96 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] r=0 lpr=96 pi=[56,96)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:21 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 96 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] r=0 lpr=96 pi=[56,96)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[56,96)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:21 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[56,96)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:21 np0005604943 python3.9[99868]: ansible-ansible.legacy.ping Invoked with data=pong
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8047744612865494e-06 of space, bias 4.0, pg target 0.0021657293535438595 quantized to 16 (current 16)
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:34:21 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Feb  2 06:34:21 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Feb  2 06:34:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Feb  2 06:34:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Feb  2 06:34:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb  2 06:34:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:34:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Feb  2 06:34:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb  2 06:34:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Feb  2 06:34:22 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb  2 06:34:22 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 97 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=97 pruub=12.068267822s) [0] r=-1 lpr=97 pi=[68,97)/1 crt=39'483 active pruub 152.541000366s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:22 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 97 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=97 pruub=12.068224907s) [0] r=-1 lpr=97 pi=[68,97)/1 crt=39'483 unknown NOTIFY pruub 152.541000366s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Feb  2 06:34:22 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=97) [0] r=0 lpr=97 pi=[68,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:22 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 97 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=96/97 n=6 ec=48/33 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[56,96)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:22 np0005604943 python3.9[100042]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:34:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Feb  2 06:34:23 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb  2 06:34:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Feb  2 06:34:23 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Feb  2 06:34:23 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=96/56 les/c/f=97/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:23 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=96/56 les/c/f=97/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:23 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=96/97 n=6 ec=48/33 lis/c=96/56 les/c/f=97/57/0 sis=98 pruub=15.001524925s) [1] async=[1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 active pruub 164.157272339s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:23 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=96/97 n=6 ec=48/33 lis/c=96/56 les/c/f=97/57/0 sis=98 pruub=15.001461983s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 unknown NOTIFY pruub 164.157272339s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:23 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 98 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=98) [0]/[2] r=-1 lpr=98 pi=[68,98)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:23 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 98 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=98) [0]/[2] r=-1 lpr=98 pi=[68,98)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 98 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=98) [0]/[2] r=0 lpr=98 pi=[68,98)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:23 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 98 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=68/69 n=6 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=98) [0]/[2] r=0 lpr=98 pi=[68,98)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:23 np0005604943 python3.9[100198]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:34:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:34:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Feb  2 06:34:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb  2 06:34:23 np0005604943 python3.9[100351]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:34:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Feb  2 06:34:24 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.c scrub starts
Feb  2 06:34:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb  2 06:34:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Feb  2 06:34:24 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.c scrub ok
Feb  2 06:34:24 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Feb  2 06:34:24 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb  2 06:34:24 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 99 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=98/99 n=6 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=98) [0]/[2] async=[0] r=0 lpr=98 pi=[68,98)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:24 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=98/99 n=6 ec=48/33 lis/c=96/56 les/c/f=97/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:24 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.a scrub starts
Feb  2 06:34:24 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.a scrub ok
Feb  2 06:34:24 np0005604943 python3.9[100505]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:34:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Feb  2 06:34:25 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb  2 06:34:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Feb  2 06:34:25 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Feb  2 06:34:25 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Feb  2 06:34:25 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Feb  2 06:34:25 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=98/68 les/c/f=99/69/0 sis=100) [0] r=0 lpr=100 pi=[68,100)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:25 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=98/99 n=6 ec=48/33 lis/c=98/68 les/c/f=99/69/0 sis=100 pruub=14.910334587s) [0] async=[0] r=-1 lpr=100 pi=[68,100)/1 crt=39'483 active pruub 158.505386353s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:25 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=98/99 n=6 ec=48/33 lis/c=98/68 les/c/f=99/69/0 sis=100 pruub=14.910277367s) [0] r=-1 lpr=100 pi=[68,100)/1 crt=39'483 unknown NOTIFY pruub 158.505386353s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:25 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=98/68 les/c/f=99/69/0 sis=100) [0] r=0 lpr=100 pi=[68,100)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:25 np0005604943 python3.9[100657]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:34:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:34:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Feb  2 06:34:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb  2 06:34:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Feb  2 06:34:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb  2 06:34:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Feb  2 06:34:26 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb  2 06:34:26 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Feb  2 06:34:26 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=100/101 n=6 ec=48/33 lis/c=98/68 les/c/f=99/69/0 sis=100) [0] r=0 lpr=100 pi=[68,100)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:26 np0005604943 python3.9[100807]: ansible-ansible.builtin.service_facts Invoked
Feb  2 06:34:26 np0005604943 network[100824]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 06:34:26 np0005604943 network[100825]: 'network-scripts' will be removed from distribution in near future.
Feb  2 06:34:26 np0005604943 network[100826]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 06:34:26 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Feb  2 06:34:26 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Feb  2 06:34:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:34:27 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Feb  2 06:34:27 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Feb  2 06:34:27 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb  2 06:34:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 0 objects/s recovering
Feb  2 06:34:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Feb  2 06:34:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb  2 06:34:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Feb  2 06:34:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb  2 06:34:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb  2 06:34:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Feb  2 06:34:28 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Feb  2 06:34:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 102 pg[9.19( v 64'487 (0'0,64'487] local-lis/les=57/58 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=102 pruub=8.615420341s) [2] r=-1 lpr=102 pi=[57,102)/1 crt=64'486 lcod 64'486 active pruub 162.996902466s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:28 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 102 pg[9.19( v 64'487 (0'0,64'487] local-lis/les=57/58 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=102 pruub=8.615286827s) [2] r=-1 lpr=102 pi=[57,102)/1 crt=64'486 lcod 64'486 unknown NOTIFY pruub 162.996902466s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:28 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=102) [2] r=0 lpr=102 pi=[57,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Feb  2 06:34:29 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb  2 06:34:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Feb  2 06:34:29 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Feb  2 06:34:29 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=103) [2]/[0] r=-1 lpr=103 pi=[57,103)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:29 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=103) [2]/[0] r=-1 lpr=103 pi=[57,103)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:29 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 103 pg[9.19( v 64'487 (0'0,64'487] local-lis/les=57/58 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=103) [2]/[0] r=0 lpr=103 pi=[57,103)/1 crt=64'486 lcod 64'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:29 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 103 pg[9.19( v 64'487 (0'0,64'487] local-lis/les=57/58 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=103) [2]/[0] r=0 lpr=103 pi=[57,103)/1 crt=64'486 lcod 64'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 48 B/s, 1 objects/s recovering
Feb  2 06:34:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Feb  2 06:34:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb  2 06:34:29 np0005604943 python3.9[101086]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:34:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Feb  2 06:34:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb  2 06:34:30 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb  2 06:34:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Feb  2 06:34:30 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Feb  2 06:34:30 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.c scrub starts
Feb  2 06:34:30 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 10.c scrub ok
Feb  2 06:34:30 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 104 pg[9.19( v 64'487 (0'0,64'487] local-lis/les=103/104 n=6 ec=48/33 lis/c=57/57 les/c/f=58/58/0 sis=103) [2]/[0] async=[2] r=0 lpr=103 pi=[57,103)/1 crt=64'487 lcod 64'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:30 np0005604943 python3.9[101236]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:34:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Feb  2 06:34:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Feb  2 06:34:31 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Feb  2 06:34:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 105 pg[9.19( v 64'487 (0'0,64'487] local-lis/les=103/104 n=6 ec=48/33 lis/c=103/57 les/c/f=104/58/0 sis=105 pruub=15.241011620s) [2] async=[2] r=-1 lpr=105 pi=[57,105)/1 crt=64'487 lcod 64'486 active pruub 172.704605103s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:31 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 105 pg[9.19( v 64'487 (0'0,64'487] local-lis/les=103/104 n=6 ec=48/33 lis/c=103/57 les/c/f=104/58/0 sis=105 pruub=15.240708351s) [2] r=-1 lpr=105 pi=[57,105)/1 crt=64'487 lcod 64'486 unknown NOTIFY pruub 172.704605103s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:31 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb  2 06:34:31 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 105 pg[9.19( v 64'487 (0'0,64'487] local-lis/les=0/0 n=6 ec=48/33 lis/c=103/57 les/c/f=104/58/0 sis=105) [2] r=0 lpr=105 pi=[57,105)/1 pct=0'0 crt=64'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:31 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 105 pg[9.19( v 64'487 (0'0,64'487] local-lis/les=0/0 n=6 ec=48/33 lis/c=103/57 les/c/f=104/58/0 sis=105) [2] r=0 lpr=105 pi=[57,105)/1 crt=64'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:31 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Feb  2 06:34:31 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Feb  2 06:34:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Feb  2 06:34:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Feb  2 06:34:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb  2 06:34:31 np0005604943 python3.9[101390]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:34:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:34:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Feb  2 06:34:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb  2 06:34:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Feb  2 06:34:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb  2 06:34:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Feb  2 06:34:32 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 106 pg[9.19( v 64'487 (0'0,64'487] local-lis/les=105/106 n=6 ec=48/33 lis/c=103/57 les/c/f=104/58/0 sis=105) [2] r=0 lpr=105 pi=[57,105)/1 crt=64'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:32 np0005604943 python3.9[101548]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:34:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb  2 06:34:33 np0005604943 python3.9[101632]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:34:33 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Feb  2 06:34:33 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Feb  2 06:34:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 95 B/s, 2 objects/s recovering
Feb  2 06:34:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Feb  2 06:34:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb  2 06:34:33 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.e scrub starts
Feb  2 06:34:33 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.e scrub ok
Feb  2 06:34:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Feb  2 06:34:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb  2 06:34:34 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Feb  2 06:34:34 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb  2 06:34:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Feb  2 06:34:34 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Feb  2 06:34:34 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Feb  2 06:34:34 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.d scrub starts
Feb  2 06:34:34 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.d scrub ok
Feb  2 06:34:35 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Feb  2 06:34:35 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Feb  2 06:34:35 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 107 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=84/85 n=6 ec=48/33 lis/c=84/84 les/c/f=85/85/0 sis=107 pruub=15.813035011s) [0] r=-1 lpr=107 pi=[84,107)/1 crt=64'487 active pruub 169.444915771s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:35 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 107 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=84/85 n=6 ec=48/33 lis/c=84/84 les/c/f=85/85/0 sis=107 pruub=15.812887192s) [0] r=-1 lpr=107 pi=[84,107)/1 crt=64'487 unknown NOTIFY pruub 169.444915771s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 107 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=84/84 les/c/f=85/85/0 sis=107) [0] r=0 lpr=107 pi=[84,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Feb  2 06:34:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Feb  2 06:34:35 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Feb  2 06:34:35 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 108 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=84/85 n=6 ec=48/33 lis/c=84/84 les/c/f=85/85/0 sis=108) [0]/[2] r=0 lpr=108 pi=[84,108)/1 crt=64'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:35 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 108 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=84/85 n=6 ec=48/33 lis/c=84/84 les/c/f=85/85/0 sis=108) [0]/[2] r=0 lpr=108 pi=[84,108)/1 crt=64'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 108 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=84/84 les/c/f=85/85/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[84,108)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:35 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 108 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=84/84 les/c/f=85/85/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[84,108)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:35 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb  2 06:34:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 97 B/s, 2 objects/s recovering
Feb  2 06:34:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Feb  2 06:34:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb  2 06:34:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Feb  2 06:34:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb  2 06:34:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Feb  2 06:34:36 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Feb  2 06:34:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb  2 06:34:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb  2 06:34:36 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 109 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=108/109 n=6 ec=48/33 lis/c=84/84 les/c/f=85/85/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[84,108)/1 crt=64'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:36 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Feb  2 06:34:36 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Feb  2 06:34:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:34:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Feb  2 06:34:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Feb  2 06:34:36 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Feb  2 06:34:36 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 110 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=108/109 n=6 ec=48/33 lis/c=108/84 les/c/f=109/85/0 sis=110 pruub=15.543957710s) [0] async=[0] r=-1 lpr=110 pi=[84,110)/1 crt=64'487 active pruub 170.968292236s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:36 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 110 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=108/109 n=6 ec=48/33 lis/c=108/84 les/c/f=109/85/0 sis=110 pruub=15.543823242s) [0] r=-1 lpr=110 pi=[84,110)/1 crt=64'487 unknown NOTIFY pruub 170.968292236s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:36 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 110 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=0/0 n=6 ec=48/33 lis/c=108/84 les/c/f=109/85/0 sis=110) [0] r=0 lpr=110 pi=[84,110)/1 pct=0'0 crt=64'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:36 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 110 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=0/0 n=6 ec=48/33 lis/c=108/84 les/c/f=109/85/0 sis=110) [0] r=0 lpr=110 pi=[84,110)/1 crt=64'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:37 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.a scrub starts
Feb  2 06:34:37 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.a scrub ok
Feb  2 06:34:37 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Feb  2 06:34:37 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Feb  2 06:34:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:34:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Feb  2 06:34:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb  2 06:34:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Feb  2 06:34:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb  2 06:34:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Feb  2 06:34:37 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb  2 06:34:37 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 111 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=68/69 n=6 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=111 pruub=12.103483200s) [0] r=-1 lpr=111 pi=[68,111)/1 crt=64'485 active pruub 168.543350220s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:37 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 111 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=68/69 n=6 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=111 pruub=12.103440285s) [0] r=-1 lpr=111 pi=[68,111)/1 crt=64'485 unknown NOTIFY pruub 168.543350220s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:37 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Feb  2 06:34:37 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=111) [0] r=0 lpr=111 pi=[68,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:37 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 111 pg[9.1c( v 64'487 (0'0,64'487] local-lis/les=110/111 n=6 ec=48/33 lis/c=108/84 les/c/f=109/85/0 sis=110) [0] r=0 lpr=110 pi=[84,110)/1 crt=64'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Feb  2 06:34:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Feb  2 06:34:39 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Feb  2 06:34:39 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[68,112)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:39 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[68,112)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:39 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb  2 06:34:39 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 112 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=68/69 n=6 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=112) [0]/[2] r=0 lpr=112 pi=[68,112)/1 crt=64'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:39 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 112 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=68/69 n=6 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=112) [0]/[2] r=0 lpr=112 pi=[68,112)/1 crt=64'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 0 objects/s recovering
Feb  2 06:34:39 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Feb  2 06:34:39 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Feb  2 06:34:40 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Feb  2 06:34:40 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Feb  2 06:34:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Feb  2 06:34:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Feb  2 06:34:40 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Feb  2 06:34:40 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.f scrub starts
Feb  2 06:34:40 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.f scrub ok
Feb  2 06:34:40 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 113 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=112/113 n=6 ec=48/33 lis/c=68/68 les/c/f=69/69/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[68,112)/1 crt=64'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:34:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:34:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:34:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:34:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:34:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:34:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Feb  2 06:34:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Feb  2 06:34:41 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Feb  2 06:34:41 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 114 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=0/0 n=6 ec=48/33 lis/c=112/68 les/c/f=113/69/0 sis=114) [0] r=0 lpr=114 pi=[68,114)/1 pct=0'0 crt=64'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:41 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 114 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=112/113 n=6 ec=48/33 lis/c=112/68 les/c/f=113/69/0 sis=114 pruub=15.159193993s) [0] async=[0] r=-1 lpr=114 pi=[68,114)/1 crt=64'485 active pruub 175.055786133s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:41 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 114 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=112/113 n=6 ec=48/33 lis/c=112/68 les/c/f=113/69/0 sis=114 pruub=15.159018517s) [0] r=-1 lpr=114 pi=[68,114)/1 crt=64'485 unknown NOTIFY pruub 175.055786133s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:41 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 114 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=0/0 n=6 ec=48/33 lis/c=112/68 les/c/f=113/69/0 sis=114) [0] r=0 lpr=114 pi=[68,114)/1 crt=64'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 0 objects/s recovering
Feb  2 06:34:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:34:42 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.d scrub starts
Feb  2 06:34:42 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.d scrub ok
Feb  2 06:34:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Feb  2 06:34:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Feb  2 06:34:42 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Feb  2 06:34:42 np0005604943 ceph-osd[86144]: osd.0 pg_epoch: 115 pg[9.1e( v 64'485 (0'0,64'485] local-lis/les=114/115 n=6 ec=48/33 lis/c=112/68 les/c/f=113/69/0 sis=114) [0] r=0 lpr=114 pi=[68,114)/1 crt=64'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:43 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Feb  2 06:34:43 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Feb  2 06:34:43 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Feb  2 06:34:43 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Feb  2 06:34:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 447 B/s wr, 10 op/s; 66 B/s, 2 objects/s recovering
Feb  2 06:34:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 06:34:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:34:44 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Feb  2 06:34:44 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Feb  2 06:34:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Feb  2 06:34:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:34:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Feb  2 06:34:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Feb  2 06:34:44 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 116 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=73/74 n=6 ec=48/33 lis/c=73/73 les/c/f=74/74/0 sis=116 pruub=11.912998199s) [1] r=-1 lpr=116 pi=[73,116)/1 crt=39'483 active pruub 174.826309204s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:44 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 116 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=73/74 n=6 ec=48/33 lis/c=73/73 les/c/f=74/74/0 sis=116 pruub=11.912327766s) [1] r=-1 lpr=116 pi=[73,116)/1 crt=39'483 unknown NOTIFY pruub 174.826309204s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:44 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb  2 06:34:44 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=73/73 les/c/f=74/74/0 sis=116) [1] r=0 lpr=116 pi=[73,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:44 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.a scrub starts
Feb  2 06:34:44 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.a scrub ok
Feb  2 06:34:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Feb  2 06:34:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Feb  2 06:34:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Feb  2 06:34:45 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=73/73 les/c/f=74/74/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[73,117)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:45 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/33 lis/c=73/73 les/c/f=74/74/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[73,117)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:45 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 06:34:45 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 117 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=73/74 n=6 ec=48/33 lis/c=73/73 les/c/f=74/74/0 sis=117) [1]/[2] r=0 lpr=117 pi=[73,117)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:45 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 117 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=73/74 n=6 ec=48/33 lis/c=73/73 les/c/f=74/74/0 sis=117) [1]/[2] r=0 lpr=117 pi=[73,117)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 483 B/s wr, 11 op/s; 71 B/s, 2 objects/s recovering
Feb  2 06:34:46 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.d scrub starts
Feb  2 06:34:46 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.d scrub ok
Feb  2 06:34:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Feb  2 06:34:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Feb  2 06:34:46 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Feb  2 06:34:46 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Feb  2 06:34:46 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Feb  2 06:34:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:34:47 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=117/118 n=6 ec=48/33 lis/c=73/73 les/c/f=74/74/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[73,117)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Feb  2 06:34:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Feb  2 06:34:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Feb  2 06:34:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Feb  2 06:34:47 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=117/73 les/c/f=118/74/0 sis=119) [1] r=0 lpr=119 pi=[73,119)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:47 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=48/33 lis/c=117/73 les/c/f=118/74/0 sis=119) [1] r=0 lpr=119 pi=[73,119)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 06:34:47 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=117/118 n=6 ec=48/33 lis/c=117/73 les/c/f=118/74/0 sis=119 pruub=15.588129997s) [1] async=[1] r=-1 lpr=119 pi=[73,119)/1 crt=39'483 active pruub 181.655395508s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb  2 06:34:47 np0005604943 ceph-osd[88236]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=117/118 n=6 ec=48/33 lis/c=117/73 les/c/f=118/74/0 sis=119 pruub=15.588050842s) [1] r=-1 lpr=119 pi=[73,119)/1 crt=39'483 unknown NOTIFY pruub 181.655395508s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 06:34:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Feb  2 06:34:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Feb  2 06:34:48 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Feb  2 06:34:48 np0005604943 ceph-osd[87192]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=48/33 lis/c=117/73 les/c/f=118/74/0 sis=119) [1] r=0 lpr=119 pi=[73,119)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 06:34:49 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Feb  2 06:34:49 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Feb  2 06:34:49 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.b scrub starts
Feb  2 06:34:49 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.b scrub ok
Feb  2 06:34:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Feb  2 06:34:50 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Feb  2 06:34:50 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Feb  2 06:34:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Feb  2 06:34:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:34:52 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.b scrub starts
Feb  2 06:34:52 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.b scrub ok
Feb  2 06:34:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Feb  2 06:34:53 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Feb  2 06:34:53 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Feb  2 06:34:54 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Feb  2 06:34:54 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Feb  2 06:34:54 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Feb  2 06:34:54 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Feb  2 06:34:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:34:56 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Feb  2 06:34:56 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Feb  2 06:34:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:34:57 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.f scrub starts
Feb  2 06:34:57 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.f scrub ok
Feb  2 06:34:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:34:57 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Feb  2 06:34:57 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Feb  2 06:34:58 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Feb  2 06:34:58 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Feb  2 06:34:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:00 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.e scrub starts
Feb  2 06:35:00 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.e scrub ok
Feb  2 06:35:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:35:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:04 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Feb  2 06:35:04 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Feb  2 06:35:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:35:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:07 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Feb  2 06:35:07 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:35:08 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Feb  2 06:35:08 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Feb  2 06:35:08 np0005604943 podman[101921]: 2026-02-02 11:35:08.518463141 +0000 UTC m=+0.039284571 container create f35412d6a2a8501995d8332e4ff7b08658e51e347b7d5bf0aa41a6f207ee8bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:35:08 np0005604943 systemd[1]: Started libpod-conmon-f35412d6a2a8501995d8332e4ff7b08658e51e347b7d5bf0aa41a6f207ee8bc8.scope.
Feb  2 06:35:08 np0005604943 podman[101921]: 2026-02-02 11:35:08.498511858 +0000 UTC m=+0.019333268 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:35:08 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:35:08 np0005604943 podman[101921]: 2026-02-02 11:35:08.628196485 +0000 UTC m=+0.149017925 container init f35412d6a2a8501995d8332e4ff7b08658e51e347b7d5bf0aa41a6f207ee8bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jemison, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:35:08 np0005604943 podman[101921]: 2026-02-02 11:35:08.635514271 +0000 UTC m=+0.156335701 container start f35412d6a2a8501995d8332e4ff7b08658e51e347b7d5bf0aa41a6f207ee8bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jemison, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 06:35:08 np0005604943 podman[101921]: 2026-02-02 11:35:08.639284831 +0000 UTC m=+0.160106241 container attach f35412d6a2a8501995d8332e4ff7b08658e51e347b7d5bf0aa41a6f207ee8bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:35:08 np0005604943 admiring_jemison[101938]: 167 167
Feb  2 06:35:08 np0005604943 systemd[1]: libpod-f35412d6a2a8501995d8332e4ff7b08658e51e347b7d5bf0aa41a6f207ee8bc8.scope: Deactivated successfully.
Feb  2 06:35:08 np0005604943 podman[101921]: 2026-02-02 11:35:08.641569463 +0000 UTC m=+0.162390893 container died f35412d6a2a8501995d8332e4ff7b08658e51e347b7d5bf0aa41a6f207ee8bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 06:35:08 np0005604943 systemd[1]: var-lib-containers-storage-overlay-590e325856614acbf5ce2f46074584e77ed36a5e1576cba3223429fb4c853cb6-merged.mount: Deactivated successfully.
Feb  2 06:35:08 np0005604943 podman[101921]: 2026-02-02 11:35:08.688159578 +0000 UTC m=+0.208981008 container remove f35412d6a2a8501995d8332e4ff7b08658e51e347b7d5bf0aa41a6f207ee8bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jemison, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:35:08 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:35:08 np0005604943 systemd[1]: libpod-conmon-f35412d6a2a8501995d8332e4ff7b08658e51e347b7d5bf0aa41a6f207ee8bc8.scope: Deactivated successfully.
Feb  2 06:35:08 np0005604943 podman[101963]: 2026-02-02 11:35:08.815985105 +0000 UTC m=+0.047644034 container create 06e1c478bc60637dfca9d115bc867eac021de4e09f7c47ae5fab57df18835017 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 06:35:08 np0005604943 systemd[1]: Started libpod-conmon-06e1c478bc60637dfca9d115bc867eac021de4e09f7c47ae5fab57df18835017.scope.
Feb  2 06:35:08 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:35:08 np0005604943 podman[101963]: 2026-02-02 11:35:08.79594725 +0000 UTC m=+0.027606209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:35:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a084e0c86e1692bd9c2235420fa42ccb82db57c1862aca7bb3e4ed532658ac55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:35:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a084e0c86e1692bd9c2235420fa42ccb82db57c1862aca7bb3e4ed532658ac55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:35:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a084e0c86e1692bd9c2235420fa42ccb82db57c1862aca7bb3e4ed532658ac55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:35:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a084e0c86e1692bd9c2235420fa42ccb82db57c1862aca7bb3e4ed532658ac55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:35:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a084e0c86e1692bd9c2235420fa42ccb82db57c1862aca7bb3e4ed532658ac55/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:35:08 np0005604943 podman[101963]: 2026-02-02 11:35:08.90216868 +0000 UTC m=+0.133827679 container init 06e1c478bc60637dfca9d115bc867eac021de4e09f7c47ae5fab57df18835017 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_robinson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:35:08 np0005604943 podman[101963]: 2026-02-02 11:35:08.912135116 +0000 UTC m=+0.143794085 container start 06e1c478bc60637dfca9d115bc867eac021de4e09f7c47ae5fab57df18835017 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:35:08 np0005604943 podman[101963]: 2026-02-02 11:35:08.917202172 +0000 UTC m=+0.148861121 container attach 06e1c478bc60637dfca9d115bc867eac021de4e09f7c47ae5fab57df18835017 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_robinson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 06:35:09 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Feb  2 06:35:09 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Feb  2 06:35:09 np0005604943 fervent_robinson[101980]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:35:09 np0005604943 fervent_robinson[101980]: --> All data devices are unavailable
Feb  2 06:35:09 np0005604943 systemd[1]: libpod-06e1c478bc60637dfca9d115bc867eac021de4e09f7c47ae5fab57df18835017.scope: Deactivated successfully.
Feb  2 06:35:09 np0005604943 podman[102000]: 2026-02-02 11:35:09.389638913 +0000 UTC m=+0.028505613 container died 06e1c478bc60637dfca9d115bc867eac021de4e09f7c47ae5fab57df18835017 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:35:09 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a084e0c86e1692bd9c2235420fa42ccb82db57c1862aca7bb3e4ed532658ac55-merged.mount: Deactivated successfully.
Feb  2 06:35:09 np0005604943 podman[102000]: 2026-02-02 11:35:09.423088427 +0000 UTC m=+0.061955097 container remove 06e1c478bc60637dfca9d115bc867eac021de4e09f7c47ae5fab57df18835017 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_robinson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 06:35:09 np0005604943 systemd[1]: libpod-conmon-06e1c478bc60637dfca9d115bc867eac021de4e09f7c47ae5fab57df18835017.scope: Deactivated successfully.
Feb  2 06:35:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:35:09
Feb  2 06:35:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:35:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:35:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'images']
Feb  2 06:35:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:35:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:09 np0005604943 podman[102075]: 2026-02-02 11:35:09.811503552 +0000 UTC m=+0.035931401 container create d4bbd21a5ae8b7a1ae6ec063afb4506d958fad6e1b796753943d21ffdb859a97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 06:35:09 np0005604943 systemd[1]: Started libpod-conmon-d4bbd21a5ae8b7a1ae6ec063afb4506d958fad6e1b796753943d21ffdb859a97.scope.
Feb  2 06:35:09 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Feb  2 06:35:09 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Feb  2 06:35:09 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:35:09 np0005604943 podman[102075]: 2026-02-02 11:35:09.793585544 +0000 UTC m=+0.018013373 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:35:09 np0005604943 podman[102075]: 2026-02-02 11:35:09.893786532 +0000 UTC m=+0.118214361 container init d4bbd21a5ae8b7a1ae6ec063afb4506d958fad6e1b796753943d21ffdb859a97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 06:35:09 np0005604943 podman[102075]: 2026-02-02 11:35:09.8989335 +0000 UTC m=+0.123361319 container start d4bbd21a5ae8b7a1ae6ec063afb4506d958fad6e1b796753943d21ffdb859a97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:35:09 np0005604943 podman[102075]: 2026-02-02 11:35:09.902376222 +0000 UTC m=+0.126804041 container attach d4bbd21a5ae8b7a1ae6ec063afb4506d958fad6e1b796753943d21ffdb859a97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 06:35:09 np0005604943 happy_vaughan[102091]: 167 167
Feb  2 06:35:09 np0005604943 systemd[1]: libpod-d4bbd21a5ae8b7a1ae6ec063afb4506d958fad6e1b796753943d21ffdb859a97.scope: Deactivated successfully.
Feb  2 06:35:09 np0005604943 podman[102075]: 2026-02-02 11:35:09.90531199 +0000 UTC m=+0.129739799 container died d4bbd21a5ae8b7a1ae6ec063afb4506d958fad6e1b796753943d21ffdb859a97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:35:09 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b771f4cc8980af5bb6c33ca52be6d180604a36a7f700a1275c3995adb905c12d-merged.mount: Deactivated successfully.
Feb  2 06:35:09 np0005604943 podman[102075]: 2026-02-02 11:35:09.936813143 +0000 UTC m=+0.161240952 container remove d4bbd21a5ae8b7a1ae6ec063afb4506d958fad6e1b796753943d21ffdb859a97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 06:35:09 np0005604943 systemd[1]: libpod-conmon-d4bbd21a5ae8b7a1ae6ec063afb4506d958fad6e1b796753943d21ffdb859a97.scope: Deactivated successfully.
Feb  2 06:35:10 np0005604943 podman[102116]: 2026-02-02 11:35:10.041353048 +0000 UTC m=+0.031233657 container create ef192d0894dda38fbd013c59ca8353554faf806593a1cb0b0d8a922b6f2ec278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ramanujan, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:35:10 np0005604943 systemd[1]: Started libpod-conmon-ef192d0894dda38fbd013c59ca8353554faf806593a1cb0b0d8a922b6f2ec278.scope.
Feb  2 06:35:10 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:35:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023ad032865389f4a1aa4199cc4c630ec0bbc0d563ea415a7f6015eb7a656a52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:35:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023ad032865389f4a1aa4199cc4c630ec0bbc0d563ea415a7f6015eb7a656a52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:35:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023ad032865389f4a1aa4199cc4c630ec0bbc0d563ea415a7f6015eb7a656a52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:35:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023ad032865389f4a1aa4199cc4c630ec0bbc0d563ea415a7f6015eb7a656a52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:35:10 np0005604943 podman[102116]: 2026-02-02 11:35:10.028220296 +0000 UTC m=+0.018100935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:35:10 np0005604943 podman[102116]: 2026-02-02 11:35:10.125798996 +0000 UTC m=+0.115679605 container init ef192d0894dda38fbd013c59ca8353554faf806593a1cb0b0d8a922b6f2ec278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 06:35:10 np0005604943 podman[102116]: 2026-02-02 11:35:10.136698477 +0000 UTC m=+0.126579086 container start ef192d0894dda38fbd013c59ca8353554faf806593a1cb0b0d8a922b6f2ec278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 06:35:10 np0005604943 podman[102116]: 2026-02-02 11:35:10.140996772 +0000 UTC m=+0.130877571 container attach ef192d0894dda38fbd013c59ca8353554faf806593a1cb0b0d8a922b6f2ec278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ramanujan, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]: {
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:    "0": [
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:        {
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "devices": [
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "/dev/loop3"
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            ],
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_name": "ceph_lv0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_size": "21470642176",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "name": "ceph_lv0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "tags": {
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.cluster_name": "ceph",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.crush_device_class": "",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.encrypted": "0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.objectstore": "bluestore",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.osd_id": "0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.type": "block",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.vdo": "0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.with_tpm": "0"
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            },
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "type": "block",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "vg_name": "ceph_vg0"
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:        }
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:    ],
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:    "1": [
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:        {
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "devices": [
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "/dev/loop4"
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            ],
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_name": "ceph_lv1",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_size": "21470642176",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "name": "ceph_lv1",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "tags": {
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.cluster_name": "ceph",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.crush_device_class": "",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.encrypted": "0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.objectstore": "bluestore",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.osd_id": "1",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.type": "block",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.vdo": "0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.with_tpm": "0"
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            },
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "type": "block",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "vg_name": "ceph_vg1"
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:        }
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:    ],
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:    "2": [
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:        {
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "devices": [
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "/dev/loop5"
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            ],
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_name": "ceph_lv2",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_size": "21470642176",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "name": "ceph_lv2",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "tags": {
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.cluster_name": "ceph",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.crush_device_class": "",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.encrypted": "0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.objectstore": "bluestore",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.osd_id": "2",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.type": "block",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.vdo": "0",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:                "ceph.with_tpm": "0"
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            },
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "type": "block",
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:            "vg_name": "ceph_vg2"
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:        }
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]:    ]
Feb  2 06:35:10 np0005604943 affectionate_ramanujan[102133]: }
Feb  2 06:35:10 np0005604943 systemd[1]: libpod-ef192d0894dda38fbd013c59ca8353554faf806593a1cb0b0d8a922b6f2ec278.scope: Deactivated successfully.
Feb  2 06:35:10 np0005604943 podman[102116]: 2026-02-02 11:35:10.385120119 +0000 UTC m=+0.375000728 container died ef192d0894dda38fbd013c59ca8353554faf806593a1cb0b0d8a922b6f2ec278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ramanujan, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:35:10 np0005604943 systemd[1]: var-lib-containers-storage-overlay-023ad032865389f4a1aa4199cc4c630ec0bbc0d563ea415a7f6015eb7a656a52-merged.mount: Deactivated successfully.
Feb  2 06:35:10 np0005604943 podman[102116]: 2026-02-02 11:35:10.417040922 +0000 UTC m=+0.406921531 container remove ef192d0894dda38fbd013c59ca8353554faf806593a1cb0b0d8a922b6f2ec278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ramanujan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:35:10 np0005604943 systemd[1]: libpod-conmon-ef192d0894dda38fbd013c59ca8353554faf806593a1cb0b0d8a922b6f2ec278.scope: Deactivated successfully.
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:35:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:35:10 np0005604943 podman[102216]: 2026-02-02 11:35:10.82093375 +0000 UTC m=+0.035845628 container create 4761c1d52c27ccc79cfba6bd693dd79d5a55b3aa04bdd08129403ca30e37db61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_diffie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 06:35:10 np0005604943 systemd[1]: Started libpod-conmon-4761c1d52c27ccc79cfba6bd693dd79d5a55b3aa04bdd08129403ca30e37db61.scope.
Feb  2 06:35:10 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:35:10 np0005604943 podman[102216]: 2026-02-02 11:35:10.892293288 +0000 UTC m=+0.107205206 container init 4761c1d52c27ccc79cfba6bd693dd79d5a55b3aa04bdd08129403ca30e37db61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 06:35:10 np0005604943 podman[102216]: 2026-02-02 11:35:10.899335707 +0000 UTC m=+0.114247585 container start 4761c1d52c27ccc79cfba6bd693dd79d5a55b3aa04bdd08129403ca30e37db61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 06:35:10 np0005604943 podman[102216]: 2026-02-02 11:35:10.805330563 +0000 UTC m=+0.020242451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:35:10 np0005604943 podman[102216]: 2026-02-02 11:35:10.903005535 +0000 UTC m=+0.117917423 container attach 4761c1d52c27ccc79cfba6bd693dd79d5a55b3aa04bdd08129403ca30e37db61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_diffie, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 06:35:10 np0005604943 practical_diffie[102232]: 167 167
Feb  2 06:35:10 np0005604943 systemd[1]: libpod-4761c1d52c27ccc79cfba6bd693dd79d5a55b3aa04bdd08129403ca30e37db61.scope: Deactivated successfully.
Feb  2 06:35:10 np0005604943 podman[102237]: 2026-02-02 11:35:10.961735415 +0000 UTC m=+0.041396158 container died 4761c1d52c27ccc79cfba6bd693dd79d5a55b3aa04bdd08129403ca30e37db61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 06:35:10 np0005604943 systemd[1]: var-lib-containers-storage-overlay-677f7661ec8533617b3ff32214156b854bbe068759cb55f185d157185c09dd0c-merged.mount: Deactivated successfully.
Feb  2 06:35:10 np0005604943 podman[102237]: 2026-02-02 11:35:10.998307733 +0000 UTC m=+0.077968486 container remove 4761c1d52c27ccc79cfba6bd693dd79d5a55b3aa04bdd08129403ca30e37db61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_diffie, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:35:11 np0005604943 systemd[1]: libpod-conmon-4761c1d52c27ccc79cfba6bd693dd79d5a55b3aa04bdd08129403ca30e37db61.scope: Deactivated successfully.
Feb  2 06:35:11 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Feb  2 06:35:11 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Feb  2 06:35:11 np0005604943 podman[102259]: 2026-02-02 11:35:11.189072593 +0000 UTC m=+0.055156025 container create 7fcddbac783d2deba824960ffe07f8ee40f57199710bdde683aa0b5e4bbd5ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 06:35:11 np0005604943 systemd[1]: Started libpod-conmon-7fcddbac783d2deba824960ffe07f8ee40f57199710bdde683aa0b5e4bbd5ef6.scope.
Feb  2 06:35:11 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Feb  2 06:35:11 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Feb  2 06:35:11 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:35:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219565b71bfef48680c2247e9a180291ea216ebd5fb108b0560f2d269b0dc5f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:35:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219565b71bfef48680c2247e9a180291ea216ebd5fb108b0560f2d269b0dc5f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:35:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219565b71bfef48680c2247e9a180291ea216ebd5fb108b0560f2d269b0dc5f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:35:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219565b71bfef48680c2247e9a180291ea216ebd5fb108b0560f2d269b0dc5f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:35:11 np0005604943 podman[102259]: 2026-02-02 11:35:11.163744106 +0000 UTC m=+0.029827558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:35:11 np0005604943 podman[102259]: 2026-02-02 11:35:11.292311324 +0000 UTC m=+0.158394826 container init 7fcddbac783d2deba824960ffe07f8ee40f57199710bdde683aa0b5e4bbd5ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:35:11 np0005604943 podman[102259]: 2026-02-02 11:35:11.299851935 +0000 UTC m=+0.165935407 container start 7fcddbac783d2deba824960ffe07f8ee40f57199710bdde683aa0b5e4bbd5ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_satoshi, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:35:11 np0005604943 podman[102259]: 2026-02-02 11:35:11.303360429 +0000 UTC m=+0.169443901 container attach 7fcddbac783d2deba824960ffe07f8ee40f57199710bdde683aa0b5e4bbd5ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_satoshi, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:35:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:11 np0005604943 lvm[102355]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:35:11 np0005604943 lvm[102355]: VG ceph_vg1 finished
Feb  2 06:35:11 np0005604943 lvm[102354]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:35:11 np0005604943 lvm[102354]: VG ceph_vg0 finished
Feb  2 06:35:11 np0005604943 lvm[102357]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:35:11 np0005604943 lvm[102357]: VG ceph_vg2 finished
Feb  2 06:35:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:35:11 np0005604943 nice_satoshi[102276]: {}
Feb  2 06:35:12 np0005604943 systemd[1]: libpod-7fcddbac783d2deba824960ffe07f8ee40f57199710bdde683aa0b5e4bbd5ef6.scope: Deactivated successfully.
Feb  2 06:35:12 np0005604943 systemd[1]: libpod-7fcddbac783d2deba824960ffe07f8ee40f57199710bdde683aa0b5e4bbd5ef6.scope: Consumed 1.082s CPU time.
Feb  2 06:35:12 np0005604943 podman[102259]: 2026-02-02 11:35:12.014046051 +0000 UTC m=+0.880129513 container died 7fcddbac783d2deba824960ffe07f8ee40f57199710bdde683aa0b5e4bbd5ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Feb  2 06:35:12 np0005604943 systemd[1]: var-lib-containers-storage-overlay-219565b71bfef48680c2247e9a180291ea216ebd5fb108b0560f2d269b0dc5f3-merged.mount: Deactivated successfully.
Feb  2 06:35:12 np0005604943 podman[102259]: 2026-02-02 11:35:12.060183503 +0000 UTC m=+0.926266965 container remove 7fcddbac783d2deba824960ffe07f8ee40f57199710bdde683aa0b5e4bbd5ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:35:12 np0005604943 systemd[1]: libpod-conmon-7fcddbac783d2deba824960ffe07f8ee40f57199710bdde683aa0b5e4bbd5ef6.scope: Deactivated successfully.
Feb  2 06:35:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:35:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:35:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:35:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:35:12 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:35:12 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:35:13 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Feb  2 06:35:13 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Feb  2 06:35:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:16 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Feb  2 06:35:16 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Feb  2 06:35:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:35:17 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Feb  2 06:35:17 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Feb  2 06:35:17 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.f scrub starts
Feb  2 06:35:17 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.f scrub ok
Feb  2 06:35:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:18 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Feb  2 06:35:18 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Feb  2 06:35:18 np0005604943 python3.9[102547]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:35:18 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.d scrub starts
Feb  2 06:35:18 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.d scrub ok
Feb  2 06:35:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:20 np0005604943 python3.9[102834]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb  2 06:35:20 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.d scrub starts
Feb  2 06:35:20 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.d scrub ok
Feb  2 06:35:20 np0005604943 python3.9[102986]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb  2 06:35:21 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Feb  2 06:35:21 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:35:21 np0005604943 python3.9[103138]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:35:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:35:22 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Feb  2 06:35:22 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Feb  2 06:35:22 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Feb  2 06:35:22 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Feb  2 06:35:22 np0005604943 python3.9[103290]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb  2 06:35:23 np0005604943 python3.9[103442]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:35:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:23 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.b scrub starts
Feb  2 06:35:23 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.b scrub ok
Feb  2 06:35:24 np0005604943 python3.9[103594]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:35:24 np0005604943 python3.9[103672]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:35:25 np0005604943 python3.9[103824]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:35:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:25 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Feb  2 06:35:25 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Feb  2 06:35:26 np0005604943 python3.9[103978]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb  2 06:35:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:35:27 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Feb  2 06:35:27 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Feb  2 06:35:27 np0005604943 python3.9[104131]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb  2 06:35:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:27 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Feb  2 06:35:28 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Feb  2 06:35:28 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Feb  2 06:35:28 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Feb  2 06:35:28 np0005604943 python3.9[104284]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 06:35:28 np0005604943 python3.9[104436]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb  2 06:35:28 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Feb  2 06:35:28 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Feb  2 06:35:29 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Feb  2 06:35:29 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Feb  2 06:35:29 np0005604943 python3.9[104588]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:35:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:31 np0005604943 python3.9[104741]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:35:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:31 np0005604943 python3.9[104893]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:35:31 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Feb  2 06:35:31 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Feb  2 06:35:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:35:32 np0005604943 python3.9[104971]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:35:32 np0005604943 python3.9[105123]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:35:33 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Feb  2 06:35:33 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Feb  2 06:35:33 np0005604943 python3.9[105201]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:35:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:34 np0005604943 python3.9[105353]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:35:34 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Feb  2 06:35:34 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Feb  2 06:35:34 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Feb  2 06:35:34 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Feb  2 06:35:35 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Feb  2 06:35:35 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Feb  2 06:35:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:35 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.f scrub starts
Feb  2 06:35:35 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.f scrub ok
Feb  2 06:35:36 np0005604943 python3.9[105504]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:35:36 np0005604943 python3.9[105656]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb  2 06:35:36 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Feb  2 06:35:36 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Feb  2 06:35:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:35:37 np0005604943 python3.9[105806]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:35:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:37 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Feb  2 06:35:37 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Feb  2 06:35:37 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Feb  2 06:35:38 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Feb  2 06:35:38 np0005604943 python3.9[105958]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:35:38 np0005604943 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb  2 06:35:38 np0005604943 systemd[1]: tuned.service: Deactivated successfully.
Feb  2 06:35:38 np0005604943 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb  2 06:35:38 np0005604943 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb  2 06:35:38 np0005604943 systemd[1]: Started Dynamic System Tuning Daemon.
Feb  2 06:35:38 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Feb  2 06:35:38 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Feb  2 06:35:39 np0005604943 python3.9[106120]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb  2 06:35:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:39 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.c scrub starts
Feb  2 06:35:39 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.c scrub ok
Feb  2 06:35:40 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Feb  2 06:35:40 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Feb  2 06:35:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:35:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:35:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:35:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:35:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:35:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:35:40 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Feb  2 06:35:40 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Feb  2 06:35:41 np0005604943 python3.9[106272]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:35:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:41 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Feb  2 06:35:41 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Feb  2 06:35:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:35:42 np0005604943 python3.9[106426]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:35:42 np0005604943 systemd[1]: session-34.scope: Deactivated successfully.
Feb  2 06:35:42 np0005604943 systemd[1]: session-34.scope: Consumed 1min 949ms CPU time.
Feb  2 06:35:42 np0005604943 systemd-logind[786]: Session 34 logged out. Waiting for processes to exit.
Feb  2 06:35:42 np0005604943 systemd-logind[786]: Removed session 34.
Feb  2 06:35:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:44 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Feb  2 06:35:44 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Feb  2 06:35:44 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Feb  2 06:35:44 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Feb  2 06:35:45 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.c scrub starts
Feb  2 06:35:45 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.c scrub ok
Feb  2 06:35:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:45 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Feb  2 06:35:45 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Feb  2 06:35:46 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Feb  2 06:35:46 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Feb  2 06:35:46 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Feb  2 06:35:46 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Feb  2 06:35:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:35:47 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Feb  2 06:35:47 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Feb  2 06:35:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:47 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Feb  2 06:35:47 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Feb  2 06:35:48 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Feb  2 06:35:48 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Feb  2 06:35:48 np0005604943 systemd-logind[786]: New session 35 of user zuul.
Feb  2 06:35:48 np0005604943 systemd[1]: Started Session 35 of User zuul.
Feb  2 06:35:49 np0005604943 python3.9[106606]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:35:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:49 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Feb  2 06:35:49 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Feb  2 06:35:50 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Feb  2 06:35:50 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Feb  2 06:35:50 np0005604943 python3.9[106762]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb  2 06:35:51 np0005604943 python3.9[106915]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:35:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:35:51 np0005604943 python3.9[106999]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  2 06:35:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:54 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Feb  2 06:35:54 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Feb  2 06:35:54 np0005604943 python3.9[107153]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:35:55 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Feb  2 06:35:55 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Feb  2 06:35:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:55 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Feb  2 06:35:55 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Feb  2 06:35:56 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Feb  2 06:35:56 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Feb  2 06:35:56 np0005604943 python3.9[107306]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 06:35:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:35:57 np0005604943 python3.9[107459]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:35:57 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Feb  2 06:35:57 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Feb  2 06:35:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:57 np0005604943 python3.9[107611]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb  2 06:35:58 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Feb  2 06:35:58 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Feb  2 06:35:58 np0005604943 python3.9[107761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:35:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:35:59 np0005604943 python3.9[107919]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:36:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:01 np0005604943 python3.9[108072]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:36:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:36:02 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Feb  2 06:36:02 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Feb  2 06:36:03 np0005604943 python3.9[108359]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb  2 06:36:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:03 np0005604943 python3.9[108509]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:36:03 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Feb  2 06:36:03 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Feb  2 06:36:04 np0005604943 python3.9[108663]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:36:04 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Feb  2 06:36:04 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Feb  2 06:36:04 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Feb  2 06:36:05 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Feb  2 06:36:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:05 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.b scrub starts
Feb  2 06:36:05 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.b scrub ok
Feb  2 06:36:05 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Feb  2 06:36:05 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Feb  2 06:36:06 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Feb  2 06:36:06 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Feb  2 06:36:06 np0005604943 python3.9[108816]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:36:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:36:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:07 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Feb  2 06:36:07 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Feb  2 06:36:08 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Feb  2 06:36:08 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Feb  2 06:36:08 np0005604943 python3.9[108969]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:36:08 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Feb  2 06:36:08 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Feb  2 06:36:09 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Feb  2 06:36:09 np0005604943 python3.9[109123]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Feb  2 06:36:09 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Feb  2 06:36:09 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Feb  2 06:36:09 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Feb  2 06:36:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:36:09
Feb  2 06:36:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:36:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:36:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['backups', 'vms', 'volumes', '.rgw.root', '.mgr', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log']
Feb  2 06:36:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:36:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:09 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Feb  2 06:36:09 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Feb  2 06:36:09 np0005604943 systemd[1]: session-35.scope: Deactivated successfully.
Feb  2 06:36:09 np0005604943 systemd-logind[786]: Session 35 logged out. Waiting for processes to exit.
Feb  2 06:36:09 np0005604943 systemd[1]: session-35.scope: Consumed 16.798s CPU time.
Feb  2 06:36:09 np0005604943 systemd-logind[786]: Removed session 35.
Feb  2 06:36:10 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.f scrub starts
Feb  2 06:36:10 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.f scrub ok
Feb  2 06:36:10 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Feb  2 06:36:10 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:36:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:36:11 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Feb  2 06:36:11 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Feb  2 06:36:11 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.e scrub starts
Feb  2 06:36:11 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.e scrub ok
Feb  2 06:36:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:11 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Feb  2 06:36:11 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Feb  2 06:36:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:36:12 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Feb  2 06:36:12 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Feb  2 06:36:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:36:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:36:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:36:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:36:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:36:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:36:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:36:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:36:12 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:36:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:36:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:36:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:36:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:36:12 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Feb  2 06:36:12 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Feb  2 06:36:13 np0005604943 podman[109291]: 2026-02-02 11:36:13.268035412 +0000 UTC m=+0.043232084 container create 9b528c50112572f789ffd85897fc5623755935937db67deb638928055528d825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:36:13 np0005604943 systemd[1]: Started libpod-conmon-9b528c50112572f789ffd85897fc5623755935937db67deb638928055528d825.scope.
Feb  2 06:36:13 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:36:13 np0005604943 podman[109291]: 2026-02-02 11:36:13.250466755 +0000 UTC m=+0.025663447 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:36:13 np0005604943 podman[109291]: 2026-02-02 11:36:13.359949568 +0000 UTC m=+0.135146340 container init 9b528c50112572f789ffd85897fc5623755935937db67deb638928055528d825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_banach, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 06:36:13 np0005604943 podman[109291]: 2026-02-02 11:36:13.368015507 +0000 UTC m=+0.143212179 container start 9b528c50112572f789ffd85897fc5623755935937db67deb638928055528d825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_banach, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Feb  2 06:36:13 np0005604943 podman[109291]: 2026-02-02 11:36:13.371294913 +0000 UTC m=+0.146491605 container attach 9b528c50112572f789ffd85897fc5623755935937db67deb638928055528d825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_banach, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:36:13 np0005604943 objective_banach[109307]: 167 167
Feb  2 06:36:13 np0005604943 systemd[1]: libpod-9b528c50112572f789ffd85897fc5623755935937db67deb638928055528d825.scope: Deactivated successfully.
Feb  2 06:36:13 np0005604943 podman[109291]: 2026-02-02 11:36:13.375589754 +0000 UTC m=+0.150786456 container died 9b528c50112572f789ffd85897fc5623755935937db67deb638928055528d825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 06:36:13 np0005604943 systemd[1]: var-lib-containers-storage-overlay-459eb476a4219de091071c309fc5229813fe94bb7efbbc1d3d18265672587699-merged.mount: Deactivated successfully.
Feb  2 06:36:13 np0005604943 podman[109291]: 2026-02-02 11:36:13.41355217 +0000 UTC m=+0.188748862 container remove 9b528c50112572f789ffd85897fc5623755935937db67deb638928055528d825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_banach, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:36:13 np0005604943 systemd[1]: libpod-conmon-9b528c50112572f789ffd85897fc5623755935937db67deb638928055528d825.scope: Deactivated successfully.
Feb  2 06:36:13 np0005604943 podman[109330]: 2026-02-02 11:36:13.561371407 +0000 UTC m=+0.053998502 container create 8983e6dc85b81fcaa273f1b615de6fe2ddf6dfb3338b3a8eee1f6ae5d21dd234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:36:13 np0005604943 systemd[1]: Started libpod-conmon-8983e6dc85b81fcaa273f1b615de6fe2ddf6dfb3338b3a8eee1f6ae5d21dd234.scope.
Feb  2 06:36:13 np0005604943 podman[109330]: 2026-02-02 11:36:13.540050084 +0000 UTC m=+0.032677169 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:36:13 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:36:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5f09ad2c1cf96bf98c065a414b6379572dab4e4b722b7ff0bb19b71ede8e14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:36:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5f09ad2c1cf96bf98c065a414b6379572dab4e4b722b7ff0bb19b71ede8e14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:36:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5f09ad2c1cf96bf98c065a414b6379572dab4e4b722b7ff0bb19b71ede8e14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:36:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5f09ad2c1cf96bf98c065a414b6379572dab4e4b722b7ff0bb19b71ede8e14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:36:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5f09ad2c1cf96bf98c065a414b6379572dab4e4b722b7ff0bb19b71ede8e14/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:36:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:13 np0005604943 podman[109330]: 2026-02-02 11:36:13.662585396 +0000 UTC m=+0.155212461 container init 8983e6dc85b81fcaa273f1b615de6fe2ddf6dfb3338b3a8eee1f6ae5d21dd234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 06:36:13 np0005604943 podman[109330]: 2026-02-02 11:36:13.674913175 +0000 UTC m=+0.167540250 container start 8983e6dc85b81fcaa273f1b615de6fe2ddf6dfb3338b3a8eee1f6ae5d21dd234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:36:13 np0005604943 podman[109330]: 2026-02-02 11:36:13.679187357 +0000 UTC m=+0.171814432 container attach 8983e6dc85b81fcaa273f1b615de6fe2ddf6dfb3338b3a8eee1f6ae5d21dd234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_ardinghelli, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:36:13 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:36:13 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:36:14 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.a scrub starts
Feb  2 06:36:14 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.a scrub ok
Feb  2 06:36:14 np0005604943 elegant_ardinghelli[109347]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:36:14 np0005604943 elegant_ardinghelli[109347]: --> All data devices are unavailable
Feb  2 06:36:14 np0005604943 systemd[1]: libpod-8983e6dc85b81fcaa273f1b615de6fe2ddf6dfb3338b3a8eee1f6ae5d21dd234.scope: Deactivated successfully.
Feb  2 06:36:14 np0005604943 podman[109330]: 2026-02-02 11:36:14.218767235 +0000 UTC m=+0.711394320 container died 8983e6dc85b81fcaa273f1b615de6fe2ddf6dfb3338b3a8eee1f6ae5d21dd234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_ardinghelli, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:36:14 np0005604943 systemd[1]: var-lib-containers-storage-overlay-2c5f09ad2c1cf96bf98c065a414b6379572dab4e4b722b7ff0bb19b71ede8e14-merged.mount: Deactivated successfully.
Feb  2 06:36:14 np0005604943 podman[109330]: 2026-02-02 11:36:14.265022386 +0000 UTC m=+0.757649441 container remove 8983e6dc85b81fcaa273f1b615de6fe2ddf6dfb3338b3a8eee1f6ae5d21dd234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:36:14 np0005604943 systemd[1]: libpod-conmon-8983e6dc85b81fcaa273f1b615de6fe2ddf6dfb3338b3a8eee1f6ae5d21dd234.scope: Deactivated successfully.
Feb  2 06:36:14 np0005604943 podman[109439]: 2026-02-02 11:36:14.751103006 +0000 UTC m=+0.051953820 container create 8e170472b0cd6752cf5cc9f074f74280347494378eeb0811ff839352f1091f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_beaver, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:36:14 np0005604943 systemd[1]: Started libpod-conmon-8e170472b0cd6752cf5cc9f074f74280347494378eeb0811ff839352f1091f3f.scope.
Feb  2 06:36:14 np0005604943 podman[109439]: 2026-02-02 11:36:14.723765547 +0000 UTC m=+0.024616401 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:36:14 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:36:14 np0005604943 podman[109439]: 2026-02-02 11:36:14.83945586 +0000 UTC m=+0.140306704 container init 8e170472b0cd6752cf5cc9f074f74280347494378eeb0811ff839352f1091f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_beaver, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:36:14 np0005604943 podman[109439]: 2026-02-02 11:36:14.847842618 +0000 UTC m=+0.148693392 container start 8e170472b0cd6752cf5cc9f074f74280347494378eeb0811ff839352f1091f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_beaver, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:36:14 np0005604943 podman[109439]: 2026-02-02 11:36:14.852681524 +0000 UTC m=+0.153532378 container attach 8e170472b0cd6752cf5cc9f074f74280347494378eeb0811ff839352f1091f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_beaver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:36:14 np0005604943 wizardly_beaver[109455]: 167 167
Feb  2 06:36:14 np0005604943 systemd[1]: libpod-8e170472b0cd6752cf5cc9f074f74280347494378eeb0811ff839352f1091f3f.scope: Deactivated successfully.
Feb  2 06:36:14 np0005604943 podman[109439]: 2026-02-02 11:36:14.855532518 +0000 UTC m=+0.156383322 container died 8e170472b0cd6752cf5cc9f074f74280347494378eeb0811ff839352f1091f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 06:36:14 np0005604943 systemd[1]: var-lib-containers-storage-overlay-694c381abf242c631eb17507264ba9bfb6c8013da78494c6b64f4a9e78f1207f-merged.mount: Deactivated successfully.
Feb  2 06:36:14 np0005604943 podman[109439]: 2026-02-02 11:36:14.90108197 +0000 UTC m=+0.201932744 container remove 8e170472b0cd6752cf5cc9f074f74280347494378eeb0811ff839352f1091f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 06:36:14 np0005604943 systemd[1]: libpod-conmon-8e170472b0cd6752cf5cc9f074f74280347494378eeb0811ff839352f1091f3f.scope: Deactivated successfully.
Feb  2 06:36:15 np0005604943 podman[109479]: 2026-02-02 11:36:15.094382888 +0000 UTC m=+0.063700804 container create e57aefc6f0b03192fc5d8720a68f1689da87ab5d1e00981dd6b19ab455f3e1f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_swirles, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:36:15 np0005604943 systemd[1]: Started libpod-conmon-e57aefc6f0b03192fc5d8720a68f1689da87ab5d1e00981dd6b19ab455f3e1f9.scope.
Feb  2 06:36:15 np0005604943 podman[109479]: 2026-02-02 11:36:15.066481384 +0000 UTC m=+0.035799350 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:36:15 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:36:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e750e93a5edc0f158db09886912d16e8dbf5d1469221f067a4a9c9b8b4857052/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:36:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e750e93a5edc0f158db09886912d16e8dbf5d1469221f067a4a9c9b8b4857052/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:36:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e750e93a5edc0f158db09886912d16e8dbf5d1469221f067a4a9c9b8b4857052/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:36:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e750e93a5edc0f158db09886912d16e8dbf5d1469221f067a4a9c9b8b4857052/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:36:15 np0005604943 podman[109479]: 2026-02-02 11:36:15.219530998 +0000 UTC m=+0.188848954 container init e57aefc6f0b03192fc5d8720a68f1689da87ab5d1e00981dd6b19ab455f3e1f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_swirles, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:36:15 np0005604943 podman[109479]: 2026-02-02 11:36:15.22924105 +0000 UTC m=+0.198558956 container start e57aefc6f0b03192fc5d8720a68f1689da87ab5d1e00981dd6b19ab455f3e1f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:36:15 np0005604943 podman[109479]: 2026-02-02 11:36:15.236077177 +0000 UTC m=+0.205395093 container attach e57aefc6f0b03192fc5d8720a68f1689da87ab5d1e00981dd6b19ab455f3e1f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 06:36:15 np0005604943 systemd-logind[786]: New session 36 of user zuul.
Feb  2 06:36:15 np0005604943 systemd[1]: Started Session 36 of User zuul.
Feb  2 06:36:15 np0005604943 focused_swirles[109496]: {
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:    "0": [
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:        {
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "devices": [
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "/dev/loop3"
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            ],
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_name": "ceph_lv0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_size": "21470642176",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "name": "ceph_lv0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "tags": {
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.cluster_name": "ceph",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.crush_device_class": "",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.encrypted": "0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.objectstore": "bluestore",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.osd_id": "0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.type": "block",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.vdo": "0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.with_tpm": "0"
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            },
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "type": "block",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "vg_name": "ceph_vg0"
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:        }
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:    ],
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:    "1": [
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:        {
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "devices": [
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "/dev/loop4"
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            ],
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_name": "ceph_lv1",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_size": "21470642176",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "name": "ceph_lv1",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "tags": {
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.cluster_name": "ceph",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.crush_device_class": "",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.encrypted": "0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.objectstore": "bluestore",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.osd_id": "1",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.type": "block",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.vdo": "0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.with_tpm": "0"
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            },
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "type": "block",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "vg_name": "ceph_vg1"
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:        }
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:    ],
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:    "2": [
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:        {
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "devices": [
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "/dev/loop5"
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            ],
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_name": "ceph_lv2",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_size": "21470642176",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "name": "ceph_lv2",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "tags": {
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.cluster_name": "ceph",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.crush_device_class": "",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.encrypted": "0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.objectstore": "bluestore",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.osd_id": "2",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.type": "block",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.vdo": "0",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:                "ceph.with_tpm": "0"
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            },
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "type": "block",
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:            "vg_name": "ceph_vg2"
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:        }
Feb  2 06:36:15 np0005604943 focused_swirles[109496]:    ]
Feb  2 06:36:15 np0005604943 focused_swirles[109496]: }
Feb  2 06:36:15 np0005604943 systemd[1]: libpod-e57aefc6f0b03192fc5d8720a68f1689da87ab5d1e00981dd6b19ab455f3e1f9.scope: Deactivated successfully.
Feb  2 06:36:15 np0005604943 podman[109479]: 2026-02-02 11:36:15.55047148 +0000 UTC m=+0.519789396 container died e57aefc6f0b03192fc5d8720a68f1689da87ab5d1e00981dd6b19ab455f3e1f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 06:36:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:15 np0005604943 systemd[1]: var-lib-containers-storage-overlay-e750e93a5edc0f158db09886912d16e8dbf5d1469221f067a4a9c9b8b4857052-merged.mount: Deactivated successfully.
Feb  2 06:36:15 np0005604943 podman[109479]: 2026-02-02 11:36:15.751664704 +0000 UTC m=+0.720982590 container remove e57aefc6f0b03192fc5d8720a68f1689da87ab5d1e00981dd6b19ab455f3e1f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_swirles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb  2 06:36:15 np0005604943 systemd[1]: libpod-conmon-e57aefc6f0b03192fc5d8720a68f1689da87ab5d1e00981dd6b19ab455f3e1f9.scope: Deactivated successfully.
Feb  2 06:36:16 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Feb  2 06:36:16 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Feb  2 06:36:16 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Feb  2 06:36:16 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Feb  2 06:36:16 np0005604943 podman[109735]: 2026-02-02 11:36:16.307543376 +0000 UTC m=+0.061631181 container create f6dfbee9c68b6f5305e18971fc0eac20449f7294da249fddb2bda013f1a4de2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:36:16 np0005604943 systemd[76640]: Created slice User Background Tasks Slice.
Feb  2 06:36:16 np0005604943 systemd[76640]: Starting Cleanup of User's Temporary Files and Directories...
Feb  2 06:36:16 np0005604943 systemd[1]: Started libpod-conmon-f6dfbee9c68b6f5305e18971fc0eac20449f7294da249fddb2bda013f1a4de2f.scope.
Feb  2 06:36:16 np0005604943 systemd[76640]: Finished Cleanup of User's Temporary Files and Directories.
Feb  2 06:36:16 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:36:16 np0005604943 podman[109735]: 2026-02-02 11:36:16.371139617 +0000 UTC m=+0.125227482 container init f6dfbee9c68b6f5305e18971fc0eac20449f7294da249fddb2bda013f1a4de2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_khorana, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:36:16 np0005604943 podman[109735]: 2026-02-02 11:36:16.282146477 +0000 UTC m=+0.036234352 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:36:16 np0005604943 podman[109735]: 2026-02-02 11:36:16.377305397 +0000 UTC m=+0.131393172 container start f6dfbee9c68b6f5305e18971fc0eac20449f7294da249fddb2bda013f1a4de2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:36:16 np0005604943 festive_khorana[109752]: 167 167
Feb  2 06:36:16 np0005604943 podman[109735]: 2026-02-02 11:36:16.381965048 +0000 UTC m=+0.136052833 container attach f6dfbee9c68b6f5305e18971fc0eac20449f7294da249fddb2bda013f1a4de2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:36:16 np0005604943 systemd[1]: libpod-f6dfbee9c68b6f5305e18971fc0eac20449f7294da249fddb2bda013f1a4de2f.scope: Deactivated successfully.
Feb  2 06:36:16 np0005604943 podman[109735]: 2026-02-02 11:36:16.382797739 +0000 UTC m=+0.136885514 container died f6dfbee9c68b6f5305e18971fc0eac20449f7294da249fddb2bda013f1a4de2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 06:36:16 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f1a8dcc5de1222879e054e18e00015bfdd85228c9934c4c2d55716376ddde83c-merged.mount: Deactivated successfully.
Feb  2 06:36:16 np0005604943 python3.9[109721]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:36:16 np0005604943 podman[109735]: 2026-02-02 11:36:16.433406704 +0000 UTC m=+0.187494519 container remove f6dfbee9c68b6f5305e18971fc0eac20449f7294da249fddb2bda013f1a4de2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_khorana, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 06:36:16 np0005604943 systemd[1]: libpod-conmon-f6dfbee9c68b6f5305e18971fc0eac20449f7294da249fddb2bda013f1a4de2f.scope: Deactivated successfully.
Feb  2 06:36:16 np0005604943 podman[109780]: 2026-02-02 11:36:16.572061634 +0000 UTC m=+0.052756961 container create 6e90e56b38899605eb10c10202d5565465bbc9fbabe5d53260ba53e98cda93ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 06:36:16 np0005604943 systemd[1]: Started libpod-conmon-6e90e56b38899605eb10c10202d5565465bbc9fbabe5d53260ba53e98cda93ec.scope.
Feb  2 06:36:16 np0005604943 podman[109780]: 2026-02-02 11:36:16.548797259 +0000 UTC m=+0.029492616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:36:16 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:36:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e812a25c5536613a87f6a19d2f29a9ab22a760bbf90979c489ff00fd0e1fe3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:36:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e812a25c5536613a87f6a19d2f29a9ab22a760bbf90979c489ff00fd0e1fe3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:36:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e812a25c5536613a87f6a19d2f29a9ab22a760bbf90979c489ff00fd0e1fe3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:36:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e812a25c5536613a87f6a19d2f29a9ab22a760bbf90979c489ff00fd0e1fe3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:36:16 np0005604943 podman[109780]: 2026-02-02 11:36:16.678569089 +0000 UTC m=+0.159264436 container init 6e90e56b38899605eb10c10202d5565465bbc9fbabe5d53260ba53e98cda93ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_elgamal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:36:16 np0005604943 podman[109780]: 2026-02-02 11:36:16.686057033 +0000 UTC m=+0.166752370 container start 6e90e56b38899605eb10c10202d5565465bbc9fbabe5d53260ba53e98cda93ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_elgamal, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:36:16 np0005604943 podman[109780]: 2026-02-02 11:36:16.689685678 +0000 UTC m=+0.170380995 container attach 6e90e56b38899605eb10c10202d5565465bbc9fbabe5d53260ba53e98cda93ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_elgamal, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:36:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:36:16 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Feb  2 06:36:16 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Feb  2 06:36:17 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.e scrub starts
Feb  2 06:36:17 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.e scrub ok
Feb  2 06:36:17 np0005604943 lvm[110024]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:36:17 np0005604943 lvm[110024]: VG ceph_vg0 finished
Feb  2 06:36:17 np0005604943 lvm[110027]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:36:17 np0005604943 lvm[110027]: VG ceph_vg1 finished
Feb  2 06:36:17 np0005604943 lvm[110029]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:36:17 np0005604943 lvm[110029]: VG ceph_vg2 finished
Feb  2 06:36:17 np0005604943 python3.9[109988]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:36:17 np0005604943 sweet_elgamal[109796]: {}
Feb  2 06:36:17 np0005604943 systemd[1]: libpod-6e90e56b38899605eb10c10202d5565465bbc9fbabe5d53260ba53e98cda93ec.scope: Deactivated successfully.
Feb  2 06:36:17 np0005604943 systemd[1]: libpod-6e90e56b38899605eb10c10202d5565465bbc9fbabe5d53260ba53e98cda93ec.scope: Consumed 1.058s CPU time.
Feb  2 06:36:17 np0005604943 podman[109780]: 2026-02-02 11:36:17.440012278 +0000 UTC m=+0.920707645 container died 6e90e56b38899605eb10c10202d5565465bbc9fbabe5d53260ba53e98cda93ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_elgamal, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:36:17 np0005604943 systemd[1]: var-lib-containers-storage-overlay-60e812a25c5536613a87f6a19d2f29a9ab22a760bbf90979c489ff00fd0e1fe3-merged.mount: Deactivated successfully.
Feb  2 06:36:17 np0005604943 podman[109780]: 2026-02-02 11:36:17.484366529 +0000 UTC m=+0.965061836 container remove 6e90e56b38899605eb10c10202d5565465bbc9fbabe5d53260ba53e98cda93ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:36:17 np0005604943 systemd[1]: libpod-conmon-6e90e56b38899605eb10c10202d5565465bbc9fbabe5d53260ba53e98cda93ec.scope: Deactivated successfully.
Feb  2 06:36:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:36:17 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:36:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:36:17 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:36:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:17 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Feb  2 06:36:17 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Feb  2 06:36:18 np0005604943 python3.9[110259]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:36:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:36:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:36:18 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Feb  2 06:36:18 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Feb  2 06:36:19 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.f scrub starts
Feb  2 06:36:19 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.f scrub ok
Feb  2 06:36:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:20 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Feb  2 06:36:20 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Feb  2 06:36:20 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.a scrub starts
Feb  2 06:36:20 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.a scrub ok
Feb  2 06:36:20 np0005604943 systemd[1]: session-36.scope: Deactivated successfully.
Feb  2 06:36:20 np0005604943 systemd[1]: session-36.scope: Consumed 2.126s CPU time.
Feb  2 06:36:20 np0005604943 systemd-logind[786]: Session 36 logged out. Waiting for processes to exit.
Feb  2 06:36:20 np0005604943 systemd-logind[786]: Removed session 36.
Feb  2 06:36:20 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Feb  2 06:36:20 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:36:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:36:22 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.d scrub starts
Feb  2 06:36:22 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.d scrub ok
Feb  2 06:36:22 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Feb  2 06:36:22 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Feb  2 06:36:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:23 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Feb  2 06:36:23 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Feb  2 06:36:24 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.c scrub starts
Feb  2 06:36:24 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.c scrub ok
Feb  2 06:36:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:25 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Feb  2 06:36:25 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Feb  2 06:36:26 np0005604943 systemd-logind[786]: New session 37 of user zuul.
Feb  2 06:36:26 np0005604943 systemd[1]: Started Session 37 of User zuul.
Feb  2 06:36:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:36:27 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Feb  2 06:36:27 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Feb  2 06:36:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:27 np0005604943 python3.9[110438]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:36:27 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Feb  2 06:36:27 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Feb  2 06:36:28 np0005604943 python3.9[110592]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:36:28 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Feb  2 06:36:28 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Feb  2 06:36:29 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Feb  2 06:36:29 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Feb  2 06:36:29 np0005604943 python3.9[110748]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:36:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:29 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Feb  2 06:36:30 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Feb  2 06:36:30 np0005604943 python3.9[110832]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:36:30 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Feb  2 06:36:30 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Feb  2 06:36:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:36:32 np0005604943 python3.9[110985]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:36:32 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Feb  2 06:36:32 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Feb  2 06:36:33 np0005604943 python3.9[111180]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:36:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:34 np0005604943 python3.9[111332]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:36:34 np0005604943 python3.9[111497]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:36:34 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Feb  2 06:36:34 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Feb  2 06:36:35 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Feb  2 06:36:35 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Feb  2 06:36:35 np0005604943 python3.9[111575]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:36:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:35 np0005604943 python3.9[111727]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:36:36 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Feb  2 06:36:36 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Feb  2 06:36:36 np0005604943 python3.9[111805]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:36:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:36:37 np0005604943 python3.9[111957]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:36:37 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Feb  2 06:36:37 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Feb  2 06:36:37 np0005604943 python3.9[112109]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:36:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:38 np0005604943 python3.9[112261]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:36:38 np0005604943 python3.9[112413]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:36:39 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Feb  2 06:36:39 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Feb  2 06:36:39 np0005604943 python3.9[112565]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:36:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:39 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Feb  2 06:36:39 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Feb  2 06:36:39 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Feb  2 06:36:39 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Feb  2 06:36:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:36:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:36:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:36:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:36:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:36:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:36:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:41 np0005604943 python3.9[112718]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:36:41 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.a scrub starts
Feb  2 06:36:41 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.a scrub ok
Feb  2 06:36:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:36:42 np0005604943 python3.9[112872]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:36:42 np0005604943 python3.9[113024]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:36:42 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Feb  2 06:36:42 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Feb  2 06:36:43 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Feb  2 06:36:43 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Feb  2 06:36:43 np0005604943 python3.9[113176]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:36:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:44 np0005604943 python3.9[113329]: ansible-service_facts Invoked
Feb  2 06:36:44 np0005604943 network[113346]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 06:36:44 np0005604943 network[113347]: 'network-scripts' will be removed from distribution in near future.
Feb  2 06:36:44 np0005604943 network[113348]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 06:36:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:45 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.f scrub starts
Feb  2 06:36:45 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.f scrub ok
Feb  2 06:36:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:36:47 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Feb  2 06:36:47 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Feb  2 06:36:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:47 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.b scrub starts
Feb  2 06:36:47 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.b scrub ok
Feb  2 06:36:48 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Feb  2 06:36:48 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Feb  2 06:36:48 np0005604943 python3.9[113800]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:36:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:50 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Feb  2 06:36:50 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Feb  2 06:36:50 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Feb  2 06:36:50 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Feb  2 06:36:50 np0005604943 python3.9[113953]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb  2 06:36:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:51 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Feb  2 06:36:51 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Feb  2 06:36:51 np0005604943 python3.9[114105]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:36:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:36:52 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Feb  2 06:36:52 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Feb  2 06:36:52 np0005604943 python3.9[114183]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:36:52 np0005604943 python3.9[114335]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:36:53 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Feb  2 06:36:53 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Feb  2 06:36:53 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Feb  2 06:36:53 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Feb  2 06:36:53 np0005604943 python3.9[114413]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:36:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:53 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.d scrub starts
Feb  2 06:36:53 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.d scrub ok
Feb  2 06:36:54 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Feb  2 06:36:54 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Feb  2 06:36:54 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Feb  2 06:36:54 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Feb  2 06:36:54 np0005604943 python3.9[114565]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:36:54 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Feb  2 06:36:54 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Feb  2 06:36:55 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Feb  2 06:36:55 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Feb  2 06:36:55 np0005604943 python3.9[114717]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:36:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:55 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Feb  2 06:36:56 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Feb  2 06:36:56 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Feb  2 06:36:56 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Feb  2 06:36:56 np0005604943 python3.9[114801]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:36:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:36:57 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Feb  2 06:36:57 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Feb  2 06:36:57 np0005604943 systemd[1]: session-37.scope: Deactivated successfully.
Feb  2 06:36:57 np0005604943 systemd[1]: session-37.scope: Consumed 21.881s CPU time.
Feb  2 06:36:57 np0005604943 systemd-logind[786]: Session 37 logged out. Waiting for processes to exit.
Feb  2 06:36:57 np0005604943 systemd-logind[786]: Removed session 37.
Feb  2 06:36:57 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.f scrub starts
Feb  2 06:36:57 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.f scrub ok
Feb  2 06:36:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:58 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Feb  2 06:36:58 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Feb  2 06:36:58 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Feb  2 06:36:58 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Feb  2 06:36:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:36:59 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Feb  2 06:36:59 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Feb  2 06:37:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:01 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Feb  2 06:37:01 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Feb  2 06:37:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:37:02 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Feb  2 06:37:02 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Feb  2 06:37:02 np0005604943 systemd-logind[786]: New session 38 of user zuul.
Feb  2 06:37:02 np0005604943 systemd[1]: Started Session 38 of User zuul.
Feb  2 06:37:03 np0005604943 python3.9[114983]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:04 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Feb  2 06:37:04 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Feb  2 06:37:04 np0005604943 python3.9[115135]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:04 np0005604943 python3.9[115213]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:04 np0005604943 systemd[1]: session-38.scope: Deactivated successfully.
Feb  2 06:37:04 np0005604943 systemd[1]: session-38.scope: Consumed 1.295s CPU time.
Feb  2 06:37:04 np0005604943 systemd-logind[786]: Session 38 logged out. Waiting for processes to exit.
Feb  2 06:37:04 np0005604943 systemd-logind[786]: Removed session 38.
Feb  2 06:37:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:37:07 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Feb  2 06:37:07 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Feb  2 06:37:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:07 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Feb  2 06:37:07 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Feb  2 06:37:08 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Feb  2 06:37:08 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Feb  2 06:37:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:37:09
Feb  2 06:37:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:37:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:37:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.meta', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control']
Feb  2 06:37:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:37:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:09 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 6.f scrub starts
Feb  2 06:37:09 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 6.f scrub ok
Feb  2 06:37:09 np0005604943 systemd-logind[786]: New session 39 of user zuul.
Feb  2 06:37:09 np0005604943 systemd[1]: Started Session 39 of User zuul.
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:37:10 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Feb  2 06:37:10 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:37:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:37:10 np0005604943 python3.9[115391]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:37:11 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.d scrub starts
Feb  2 06:37:11 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.d scrub ok
Feb  2 06:37:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:37:12 np0005604943 python3.9[115547]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:12 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Feb  2 06:37:12 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Feb  2 06:37:12 np0005604943 python3.9[115722]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:13 np0005604943 python3.9[115800]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.n1ahr5_m recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:14 np0005604943 python3.9[115952]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:14 np0005604943 python3.9[116030]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.mqtl4y6c recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:14 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Feb  2 06:37:14 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Feb  2 06:37:14 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.e scrub starts
Feb  2 06:37:14 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.e scrub ok
Feb  2 06:37:15 np0005604943 python3.9[116182]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:37:15 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.e scrub starts
Feb  2 06:37:15 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 10.e scrub ok
Feb  2 06:37:15 np0005604943 python3.9[116334]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:16 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 6.a scrub starts
Feb  2 06:37:16 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 6.a scrub ok
Feb  2 06:37:16 np0005604943 python3.9[116412]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:37:16 np0005604943 python3.9[116564]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:16 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Feb  2 06:37:16 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Feb  2 06:37:16 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Feb  2 06:37:16 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Feb  2 06:37:16 np0005604943 python3.9[116642]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:37:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:37:17 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Feb  2 06:37:17 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Feb  2 06:37:17 np0005604943 python3.9[116794]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:18 np0005604943 python3.9[117011]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:37:18 np0005604943 python3.9[117156]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:18 np0005604943 podman[117169]: 2026-02-02 11:37:18.484228279 +0000 UTC m=+0.034452103 container create 4b31fa1725c7838d39291b84379f031cbca4acef2321cbac49b1bdcaecc193a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_montalcini, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:37:18 np0005604943 systemd[1]: Started libpod-conmon-4b31fa1725c7838d39291b84379f031cbca4acef2321cbac49b1bdcaecc193a6.scope.
Feb  2 06:37:18 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:37:18 np0005604943 podman[117169]: 2026-02-02 11:37:18.53978495 +0000 UTC m=+0.090008804 container init 4b31fa1725c7838d39291b84379f031cbca4acef2321cbac49b1bdcaecc193a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_montalcini, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:37:18 np0005604943 podman[117169]: 2026-02-02 11:37:18.544805939 +0000 UTC m=+0.095029763 container start 4b31fa1725c7838d39291b84379f031cbca4acef2321cbac49b1bdcaecc193a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 06:37:18 np0005604943 infallible_montalcini[117186]: 167 167
Feb  2 06:37:18 np0005604943 systemd[1]: libpod-4b31fa1725c7838d39291b84379f031cbca4acef2321cbac49b1bdcaecc193a6.scope: Deactivated successfully.
Feb  2 06:37:18 np0005604943 podman[117169]: 2026-02-02 11:37:18.550244111 +0000 UTC m=+0.100467935 container attach 4b31fa1725c7838d39291b84379f031cbca4acef2321cbac49b1bdcaecc193a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:37:18 np0005604943 podman[117169]: 2026-02-02 11:37:18.551966556 +0000 UTC m=+0.102190400 container died 4b31fa1725c7838d39291b84379f031cbca4acef2321cbac49b1bdcaecc193a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:37:18 np0005604943 podman[117169]: 2026-02-02 11:37:18.467610589 +0000 UTC m=+0.017834443 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:37:18 np0005604943 systemd[1]: var-lib-containers-storage-overlay-72564486649cc4ae79248a409769a7a86f3e961011b38ee75e90ff82ddd179ba-merged.mount: Deactivated successfully.
Feb  2 06:37:18 np0005604943 podman[117169]: 2026-02-02 11:37:18.589548089 +0000 UTC m=+0.139771913 container remove 4b31fa1725c7838d39291b84379f031cbca4acef2321cbac49b1bdcaecc193a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_montalcini, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 06:37:18 np0005604943 systemd[1]: libpod-conmon-4b31fa1725c7838d39291b84379f031cbca4acef2321cbac49b1bdcaecc193a6.scope: Deactivated successfully.
Feb  2 06:37:18 np0005604943 podman[117257]: 2026-02-02 11:37:18.712346442 +0000 UTC m=+0.033586891 container create 9a99dadf568b9bbb66f62cb9b28f1add3e4e4e8a623725229917198fc8a89594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ramanujan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:37:18 np0005604943 systemd[1]: Started libpod-conmon-9a99dadf568b9bbb66f62cb9b28f1add3e4e4e8a623725229917198fc8a89594.scope.
Feb  2 06:37:18 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:37:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875b31f6b7030fc253eebb873515107020a4d2026da735722b8b3e48dd5d3f4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:37:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875b31f6b7030fc253eebb873515107020a4d2026da735722b8b3e48dd5d3f4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:37:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875b31f6b7030fc253eebb873515107020a4d2026da735722b8b3e48dd5d3f4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:37:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875b31f6b7030fc253eebb873515107020a4d2026da735722b8b3e48dd5d3f4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:37:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875b31f6b7030fc253eebb873515107020a4d2026da735722b8b3e48dd5d3f4f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:37:18 np0005604943 podman[117257]: 2026-02-02 11:37:18.76548115 +0000 UTC m=+0.086721609 container init 9a99dadf568b9bbb66f62cb9b28f1add3e4e4e8a623725229917198fc8a89594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 06:37:18 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Feb  2 06:37:18 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Feb  2 06:37:18 np0005604943 podman[117257]: 2026-02-02 11:37:18.77937972 +0000 UTC m=+0.100620159 container start 9a99dadf568b9bbb66f62cb9b28f1add3e4e4e8a623725229917198fc8a89594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ramanujan, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:37:18 np0005604943 podman[117257]: 2026-02-02 11:37:18.786651268 +0000 UTC m=+0.107891747 container attach 9a99dadf568b9bbb66f62cb9b28f1add3e4e4e8a623725229917198fc8a89594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ramanujan, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:37:18 np0005604943 podman[117257]: 2026-02-02 11:37:18.698151585 +0000 UTC m=+0.019392054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:37:18 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:37:19 np0005604943 python3.9[117383]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:19 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Feb  2 06:37:19 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Feb  2 06:37:19 np0005604943 youthful_ramanujan[117303]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:37:19 np0005604943 youthful_ramanujan[117303]: --> All data devices are unavailable
Feb  2 06:37:19 np0005604943 systemd[1]: libpod-9a99dadf568b9bbb66f62cb9b28f1add3e4e4e8a623725229917198fc8a89594.scope: Deactivated successfully.
Feb  2 06:37:19 np0005604943 podman[117257]: 2026-02-02 11:37:19.245471111 +0000 UTC m=+0.566711560 container died 9a99dadf568b9bbb66f62cb9b28f1add3e4e4e8a623725229917198fc8a89594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 06:37:19 np0005604943 systemd[1]: var-lib-containers-storage-overlay-875b31f6b7030fc253eebb873515107020a4d2026da735722b8b3e48dd5d3f4f-merged.mount: Deactivated successfully.
Feb  2 06:37:19 np0005604943 podman[117257]: 2026-02-02 11:37:19.293878355 +0000 UTC m=+0.615118804 container remove 9a99dadf568b9bbb66f62cb9b28f1add3e4e4e8a623725229917198fc8a89594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:37:19 np0005604943 systemd[1]: libpod-conmon-9a99dadf568b9bbb66f62cb9b28f1add3e4e4e8a623725229917198fc8a89594.scope: Deactivated successfully.
Feb  2 06:37:19 np0005604943 python3.9[117490]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:19 np0005604943 podman[117598]: 2026-02-02 11:37:19.713759149 +0000 UTC m=+0.044651019 container create 4a766a8751545e74bfe103721607b793b75f8c45697f74f6f2b5b0ef95f4d496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:37:19 np0005604943 systemd[1]: Started libpod-conmon-4a766a8751545e74bfe103721607b793b75f8c45697f74f6f2b5b0ef95f4d496.scope.
Feb  2 06:37:19 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:37:19 np0005604943 podman[117598]: 2026-02-02 11:37:19.694193281 +0000 UTC m=+0.025085201 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:37:19 np0005604943 podman[117598]: 2026-02-02 11:37:19.791465083 +0000 UTC m=+0.122357003 container init 4a766a8751545e74bfe103721607b793b75f8c45697f74f6f2b5b0ef95f4d496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 06:37:19 np0005604943 podman[117598]: 2026-02-02 11:37:19.798839764 +0000 UTC m=+0.129731634 container start 4a766a8751545e74bfe103721607b793b75f8c45697f74f6f2b5b0ef95f4d496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb  2 06:37:19 np0005604943 podman[117598]: 2026-02-02 11:37:19.802364486 +0000 UTC m=+0.133256396 container attach 4a766a8751545e74bfe103721607b793b75f8c45697f74f6f2b5b0ef95f4d496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 06:37:19 np0005604943 hungry_goodall[117647]: 167 167
Feb  2 06:37:19 np0005604943 systemd[1]: libpod-4a766a8751545e74bfe103721607b793b75f8c45697f74f6f2b5b0ef95f4d496.scope: Deactivated successfully.
Feb  2 06:37:19 np0005604943 podman[117598]: 2026-02-02 11:37:19.804527622 +0000 UTC m=+0.135419482 container died 4a766a8751545e74bfe103721607b793b75f8c45697f74f6f2b5b0ef95f4d496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 06:37:19 np0005604943 systemd[1]: var-lib-containers-storage-overlay-76935a2dcf2f6d21abab3a004d48e00f928c765056c4b6383c4971a86e79e4b1-merged.mount: Deactivated successfully.
Feb  2 06:37:19 np0005604943 podman[117598]: 2026-02-02 11:37:19.836735116 +0000 UTC m=+0.167626976 container remove 4a766a8751545e74bfe103721607b793b75f8c45697f74f6f2b5b0ef95f4d496 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 06:37:19 np0005604943 systemd[1]: libpod-conmon-4a766a8751545e74bfe103721607b793b75f8c45697f74f6f2b5b0ef95f4d496.scope: Deactivated successfully.
Feb  2 06:37:19 np0005604943 podman[117670]: 2026-02-02 11:37:19.980996856 +0000 UTC m=+0.043415927 container create e1c0d34f826fe7c85313adee2f9f2336c3f894555be755edca7b5a1d06b29938 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_boyd, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:37:20 np0005604943 systemd[1]: Started libpod-conmon-e1c0d34f826fe7c85313adee2f9f2336c3f894555be755edca7b5a1d06b29938.scope.
Feb  2 06:37:20 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:37:20 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676121b96cbfea689eeeb68673e0d610733dd67dd85e85ac7a6e94c7af4f3666/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:37:20 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676121b96cbfea689eeeb68673e0d610733dd67dd85e85ac7a6e94c7af4f3666/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:37:20 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676121b96cbfea689eeeb68673e0d610733dd67dd85e85ac7a6e94c7af4f3666/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:37:20 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676121b96cbfea689eeeb68673e0d610733dd67dd85e85ac7a6e94c7af4f3666/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:37:20 np0005604943 podman[117670]: 2026-02-02 11:37:19.964342114 +0000 UTC m=+0.026761205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:37:20 np0005604943 podman[117670]: 2026-02-02 11:37:20.073129024 +0000 UTC m=+0.135548125 container init e1c0d34f826fe7c85313adee2f9f2336c3f894555be755edca7b5a1d06b29938 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_boyd, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 06:37:20 np0005604943 podman[117670]: 2026-02-02 11:37:20.079254962 +0000 UTC m=+0.141674013 container start e1c0d34f826fe7c85313adee2f9f2336c3f894555be755edca7b5a1d06b29938 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_boyd, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:37:20 np0005604943 podman[117670]: 2026-02-02 11:37:20.090275728 +0000 UTC m=+0.152694829 container attach e1c0d34f826fe7c85313adee2f9f2336c3f894555be755edca7b5a1d06b29938 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]: {
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:    "0": [
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:        {
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "devices": [
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "/dev/loop3"
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            ],
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_name": "ceph_lv0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_size": "21470642176",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "name": "ceph_lv0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "tags": {
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.cluster_name": "ceph",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.crush_device_class": "",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.encrypted": "0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.objectstore": "bluestore",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.osd_id": "0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.type": "block",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.vdo": "0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.with_tpm": "0"
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            },
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "type": "block",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "vg_name": "ceph_vg0"
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:        }
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:    ],
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:    "1": [
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:        {
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "devices": [
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "/dev/loop4"
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            ],
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_name": "ceph_lv1",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_size": "21470642176",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "name": "ceph_lv1",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "tags": {
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.cluster_name": "ceph",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.crush_device_class": "",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.encrypted": "0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.objectstore": "bluestore",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.osd_id": "1",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.type": "block",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.vdo": "0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.with_tpm": "0"
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            },
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "type": "block",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "vg_name": "ceph_vg1"
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:        }
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:    ],
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:    "2": [
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:        {
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "devices": [
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "/dev/loop5"
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            ],
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_name": "ceph_lv2",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_size": "21470642176",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "name": "ceph_lv2",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "tags": {
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.cluster_name": "ceph",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.crush_device_class": "",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.encrypted": "0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.objectstore": "bluestore",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.osd_id": "2",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.type": "block",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.vdo": "0",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:                "ceph.with_tpm": "0"
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            },
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "type": "block",
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:            "vg_name": "ceph_vg2"
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:        }
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]:    ]
Feb  2 06:37:20 np0005604943 gallant_boyd[117687]: }
Feb  2 06:37:20 np0005604943 systemd[1]: libpod-e1c0d34f826fe7c85313adee2f9f2336c3f894555be755edca7b5a1d06b29938.scope: Deactivated successfully.
Feb  2 06:37:20 np0005604943 podman[117670]: 2026-02-02 11:37:20.375799869 +0000 UTC m=+0.438218910 container died e1c0d34f826fe7c85313adee2f9f2336c3f894555be755edca7b5a1d06b29938 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:37:20 np0005604943 systemd[1]: var-lib-containers-storage-overlay-676121b96cbfea689eeeb68673e0d610733dd67dd85e85ac7a6e94c7af4f3666-merged.mount: Deactivated successfully.
Feb  2 06:37:20 np0005604943 podman[117670]: 2026-02-02 11:37:20.422612143 +0000 UTC m=+0.485031194 container remove e1c0d34f826fe7c85313adee2f9f2336c3f894555be755edca7b5a1d06b29938 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_boyd, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 06:37:20 np0005604943 systemd[1]: libpod-conmon-e1c0d34f826fe7c85313adee2f9f2336c3f894555be755edca7b5a1d06b29938.scope: Deactivated successfully.
Feb  2 06:37:20 np0005604943 python3.9[117769]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:37:20 np0005604943 systemd[1]: Reloading.
Feb  2 06:37:20 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:37:20 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:37:20 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Feb  2 06:37:20 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Feb  2 06:37:20 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Feb  2 06:37:20 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Feb  2 06:37:21 np0005604943 podman[117962]: 2026-02-02 11:37:21.08301063 +0000 UTC m=+0.041192648 container create e586e9935d57a3cbf636e4407589723431d6ddbb3164ad6476930a0d02a3bf01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:37:21 np0005604943 systemd[1]: Started libpod-conmon-e586e9935d57a3cbf636e4407589723431d6ddbb3164ad6476930a0d02a3bf01.scope.
Feb  2 06:37:21 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:37:21 np0005604943 podman[117962]: 2026-02-02 11:37:21.158965739 +0000 UTC m=+0.117147757 container init e586e9935d57a3cbf636e4407589723431d6ddbb3164ad6476930a0d02a3bf01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_buck, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:37:21 np0005604943 podman[117962]: 2026-02-02 11:37:21.166682499 +0000 UTC m=+0.124864507 container start e586e9935d57a3cbf636e4407589723431d6ddbb3164ad6476930a0d02a3bf01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:37:21 np0005604943 podman[117962]: 2026-02-02 11:37:21.070187298 +0000 UTC m=+0.028369326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:37:21 np0005604943 podman[117962]: 2026-02-02 11:37:21.170721724 +0000 UTC m=+0.128903812 container attach e586e9935d57a3cbf636e4407589723431d6ddbb3164ad6476930a0d02a3bf01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_buck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:37:21 np0005604943 systemd[1]: libpod-e586e9935d57a3cbf636e4407589723431d6ddbb3164ad6476930a0d02a3bf01.scope: Deactivated successfully.
Feb  2 06:37:21 np0005604943 recursing_buck[118022]: 167 167
Feb  2 06:37:21 np0005604943 conmon[118022]: conmon e586e9935d57a3cbf636 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e586e9935d57a3cbf636e4407589723431d6ddbb3164ad6476930a0d02a3bf01.scope/container/memory.events
Feb  2 06:37:21 np0005604943 podman[117962]: 2026-02-02 11:37:21.173066805 +0000 UTC m=+0.131248813 container died e586e9935d57a3cbf636e4407589723431d6ddbb3164ad6476930a0d02a3bf01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_buck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:37:21 np0005604943 systemd[1]: var-lib-containers-storage-overlay-9593c400b1cc60ff10016bf717fc59defd1ba8e4b89013596f121ae64ee365f7-merged.mount: Deactivated successfully.
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:37:21 np0005604943 podman[117962]: 2026-02-02 11:37:21.21727542 +0000 UTC m=+0.175457428 container remove e586e9935d57a3cbf636e4407589723431d6ddbb3164ad6476930a0d02a3bf01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 06:37:21 np0005604943 systemd[1]: libpod-conmon-e586e9935d57a3cbf636e4407589723431d6ddbb3164ad6476930a0d02a3bf01.scope: Deactivated successfully.
Feb  2 06:37:21 np0005604943 podman[118078]: 2026-02-02 11:37:21.350252037 +0000 UTC m=+0.036546948 container create d8ee235aee944e7c71f69441681e25dfa5acf526ad19d21e5369135b8891755e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:37:21 np0005604943 python3.9[118069]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:21 np0005604943 systemd[1]: Started libpod-conmon-d8ee235aee944e7c71f69441681e25dfa5acf526ad19d21e5369135b8891755e.scope.
Feb  2 06:37:21 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:37:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565fa84cb23bc33832f23edad228915c458919e88439db449a55069de7244425/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:37:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565fa84cb23bc33832f23edad228915c458919e88439db449a55069de7244425/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:37:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565fa84cb23bc33832f23edad228915c458919e88439db449a55069de7244425/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:37:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565fa84cb23bc33832f23edad228915c458919e88439db449a55069de7244425/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:37:21 np0005604943 podman[118078]: 2026-02-02 11:37:21.333976715 +0000 UTC m=+0.020271646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:37:21 np0005604943 podman[118078]: 2026-02-02 11:37:21.433180807 +0000 UTC m=+0.119475818 container init d8ee235aee944e7c71f69441681e25dfa5acf526ad19d21e5369135b8891755e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:37:21 np0005604943 podman[118078]: 2026-02-02 11:37:21.438261789 +0000 UTC m=+0.124556700 container start d8ee235aee944e7c71f69441681e25dfa5acf526ad19d21e5369135b8891755e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:37:21 np0005604943 podman[118078]: 2026-02-02 11:37:21.44216031 +0000 UTC m=+0.128455341 container attach d8ee235aee944e7c71f69441681e25dfa5acf526ad19d21e5369135b8891755e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:37:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:21 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Feb  2 06:37:21 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Feb  2 06:37:21 np0005604943 python3.9[118177]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:37:22 np0005604943 lvm[118325]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:37:22 np0005604943 lvm[118325]: VG ceph_vg0 finished
Feb  2 06:37:22 np0005604943 lvm[118347]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:37:22 np0005604943 lvm[118347]: VG ceph_vg1 finished
Feb  2 06:37:22 np0005604943 lvm[118352]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:37:22 np0005604943 lvm[118352]: VG ceph_vg2 finished
Feb  2 06:37:22 np0005604943 reverent_kare[118097]: {}
Feb  2 06:37:22 np0005604943 systemd[1]: libpod-d8ee235aee944e7c71f69441681e25dfa5acf526ad19d21e5369135b8891755e.scope: Deactivated successfully.
Feb  2 06:37:22 np0005604943 podman[118078]: 2026-02-02 11:37:22.20337884 +0000 UTC m=+0.889673761 container died d8ee235aee944e7c71f69441681e25dfa5acf526ad19d21e5369135b8891755e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 06:37:22 np0005604943 systemd[1]: libpod-d8ee235aee944e7c71f69441681e25dfa5acf526ad19d21e5369135b8891755e.scope: Consumed 1.034s CPU time.
Feb  2 06:37:22 np0005604943 systemd[1]: var-lib-containers-storage-overlay-565fa84cb23bc33832f23edad228915c458919e88439db449a55069de7244425-merged.mount: Deactivated successfully.
Feb  2 06:37:22 np0005604943 podman[118078]: 2026-02-02 11:37:22.245915673 +0000 UTC m=+0.932210584 container remove d8ee235aee944e7c71f69441681e25dfa5acf526ad19d21e5369135b8891755e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:37:22 np0005604943 systemd[1]: libpod-conmon-d8ee235aee944e7c71f69441681e25dfa5acf526ad19d21e5369135b8891755e.scope: Deactivated successfully.
Feb  2 06:37:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:37:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:37:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:37:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:37:22 np0005604943 python3.9[118407]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:22 np0005604943 python3.9[118522]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:23 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:37:23 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:37:23 np0005604943 python3.9[118674]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:37:23 np0005604943 systemd[1]: Reloading.
Feb  2 06:37:23 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:37:23 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:37:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:23 np0005604943 systemd[1]: Starting Create netns directory...
Feb  2 06:37:23 np0005604943 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  2 06:37:23 np0005604943 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  2 06:37:23 np0005604943 systemd[1]: Finished Create netns directory.
Feb  2 06:37:24 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Feb  2 06:37:24 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Feb  2 06:37:24 np0005604943 python3.9[118864]: ansible-ansible.builtin.service_facts Invoked
Feb  2 06:37:24 np0005604943 network[118881]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 06:37:24 np0005604943 network[118882]: 'network-scripts' will be removed from distribution in near future.
Feb  2 06:37:24 np0005604943 network[118883]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 06:37:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:25 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.f scrub starts
Feb  2 06:37:25 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.f scrub ok
Feb  2 06:37:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:37:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:27 np0005604943 python3.9[119145]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:28 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Feb  2 06:37:28 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Feb  2 06:37:28 np0005604943 python3.9[119223]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:28 np0005604943 python3.9[119375]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:29 np0005604943 python3.9[119527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:37:29.633758) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032249633813, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7212, "num_deletes": 251, "total_data_size": 9810279, "memory_usage": 10005160, "flush_reason": "Manual Compaction"}
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032249672533, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7780819, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7355, "table_properties": {"data_size": 7753935, "index_size": 17617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 75536, "raw_average_key_size": 23, "raw_value_size": 7691095, "raw_average_value_size": 2363, "num_data_blocks": 775, "num_entries": 3254, "num_filter_entries": 3254, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031851, "oldest_key_time": 1770031851, "file_creation_time": 1770032249, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 38827 microseconds, and 10458 cpu microseconds.
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:37:29.672584) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7780819 bytes OK
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:37:29.672603) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:37:29.674068) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:37:29.674094) EVENT_LOG_v1 {"time_micros": 1770032249674087, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:37:29.674134) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9778900, prev total WAL file size 9778900, number of live WAL files 2.
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:37:29.676265) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7598KB) 13(58KB) 8(1944B)]
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032249676437, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7842723, "oldest_snapshot_seqno": -1}
Feb  2 06:37:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3080 keys, 7795611 bytes, temperature: kUnknown
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032249737405, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7795611, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7769132, "index_size": 17665, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 73976, "raw_average_key_size": 24, "raw_value_size": 7707615, "raw_average_value_size": 2502, "num_data_blocks": 778, "num_entries": 3080, "num_filter_entries": 3080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770032249, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:37:29.737733) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7795611 bytes
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:37:29.739303) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.4 rd, 127.7 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3369, records dropped: 289 output_compression: NoCompression
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:37:29.739344) EVENT_LOG_v1 {"time_micros": 1770032249739324, "job": 4, "event": "compaction_finished", "compaction_time_micros": 61066, "compaction_time_cpu_micros": 24351, "output_level": 6, "num_output_files": 1, "total_output_size": 7795611, "num_input_records": 3369, "num_output_records": 3080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032249741128, "job": 4, "event": "table_file_deletion", "file_number": 19}
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032249741404, "job": 4, "event": "table_file_deletion", "file_number": 13}
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032249741590, "job": 4, "event": "table_file_deletion", "file_number": 8}
Feb  2 06:37:29 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:37:29.675970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:37:29 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.c scrub starts
Feb  2 06:37:29 np0005604943 python3.9[119606]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:29 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.c scrub ok
Feb  2 06:37:30 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Feb  2 06:37:30 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Feb  2 06:37:30 np0005604943 python3.9[119758]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb  2 06:37:30 np0005604943 systemd[1]: Starting Time & Date Service...
Feb  2 06:37:30 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Feb  2 06:37:30 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Feb  2 06:37:30 np0005604943 systemd[1]: Started Time & Date Service.
Feb  2 06:37:31 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Feb  2 06:37:31 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Feb  2 06:37:31 np0005604943 python3.9[119914]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:37:32 np0005604943 python3.9[120066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:32 np0005604943 python3.9[120144]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:33 np0005604943 python3.9[120296]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:33 np0005604943 python3.9[120374]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.lpyjczen recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:34 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Feb  2 06:37:34 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Feb  2 06:37:34 np0005604943 python3.9[120526]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:34 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Feb  2 06:37:34 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Feb  2 06:37:34 np0005604943 python3.9[120604]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:35 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Feb  2 06:37:35 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Feb  2 06:37:35 np0005604943 python3.9[120756]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:37:35 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Feb  2 06:37:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:35 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Feb  2 06:37:35 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Feb  2 06:37:35 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Feb  2 06:37:36 np0005604943 python3[120909]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  2 06:37:36 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Feb  2 06:37:36 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Feb  2 06:37:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:37:36 np0005604943 python3.9[121061]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:37 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.b scrub starts
Feb  2 06:37:37 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.b scrub ok
Feb  2 06:37:37 np0005604943 python3.9[121139]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:37 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Feb  2 06:37:37 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Feb  2 06:37:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:38 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Feb  2 06:37:38 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Feb  2 06:37:38 np0005604943 python3.9[121291]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:38 np0005604943 python3.9[121416]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032257.6151817-308-91524270291270/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:39 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.d scrub starts
Feb  2 06:37:39 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.d scrub ok
Feb  2 06:37:39 np0005604943 python3.9[121568]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:39 np0005604943 python3.9[121646]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:39 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Feb  2 06:37:39 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Feb  2 06:37:40 np0005604943 python3.9[121798]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:40 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Feb  2 06:37:40 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Feb  2 06:37:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:37:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:37:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:37:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:37:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:37:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:37:40 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Feb  2 06:37:40 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Feb  2 06:37:41 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Feb  2 06:37:41 np0005604943 python3.9[121876]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:41 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Feb  2 06:37:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:41 np0005604943 python3.9[122028]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:37:42 np0005604943 python3.9[122106]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:42 np0005604943 python3.9[122258]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:37:43 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Feb  2 06:37:43 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Feb  2 06:37:43 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Feb  2 06:37:43 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Feb  2 06:37:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:43 np0005604943 python3.9[122413]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:43 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Feb  2 06:37:43 np0005604943 ceph-osd[88236]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Feb  2 06:37:44 np0005604943 python3.9[122565]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:44 np0005604943 python3.9[122717]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:45 np0005604943 python3.9[122869]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb  2 06:37:46 np0005604943 python3.9[123021]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb  2 06:37:46 np0005604943 systemd[1]: session-39.scope: Deactivated successfully.
Feb  2 06:37:46 np0005604943 systemd[1]: session-39.scope: Consumed 26.899s CPU time.
Feb  2 06:37:46 np0005604943 systemd-logind[786]: Session 39 logged out. Waiting for processes to exit.
Feb  2 06:37:46 np0005604943 systemd-logind[786]: Removed session 39.
Feb  2 06:37:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:37:47 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Feb  2 06:37:47 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Feb  2 06:37:47 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Feb  2 06:37:47 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Feb  2 06:37:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:48 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Feb  2 06:37:48 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Feb  2 06:37:49 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Feb  2 06:37:49 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Feb  2 06:37:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:51 np0005604943 systemd-logind[786]: New session 40 of user zuul.
Feb  2 06:37:51 np0005604943 systemd[1]: Started Session 40 of User zuul.
Feb  2 06:37:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:37:52 np0005604943 python3.9[123201]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb  2 06:37:53 np0005604943 python3.9[123354]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:37:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:54 np0005604943 python3.9[123508]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Feb  2 06:37:54 np0005604943 python3.9[123660]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.1hpebu9h follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:37:55 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Feb  2 06:37:55 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Feb  2 06:37:55 np0005604943 python3.9[123785]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.1hpebu9h mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032274.3503027-44-10934517474981/.source.1hpebu9h _original_basename=.r3vnsib0 follow=False checksum=084eca3c3b7094c59ee776ad1bc7a3df401d8e18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:56 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Feb  2 06:37:56 np0005604943 ceph-osd[86144]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Feb  2 06:37:56 np0005604943 python3.9[123937]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:37:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:37:57 np0005604943 python3.9[124089]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfaF4IdqXfMSGs4GhWaZFtA4Qu8RGgt5AsxWEnaCvTDvzl7EYu73JYQ0NnpnA1dZyRSPtWK5yZNFuCp9mcYerHE/VOIeY84yboreiq6oJoObwGlSPmdrEZPMShaSMrVfhkVseqLc4y1S5kU2UZqM3+OkpCVVeajnfHqZi6qH1fdoEe+mLgxKgX/vu7GRACcrVTzSnOnvLGcbiUy+FF++Euyk9D7DfF6caFdd4zoDFTd4CWiGjsQ2yWxZ0L5PSc3ObyJt3lxxxujtpacNakug142pr0O5PWMASfhm5nw72W45Ejp6uoLsWLLa4YZT4UYD8/bhgf2KMFQAUCXMoU2/+zauSg+IzqW+JPFQWYFEsBiDqg/jqu3VV7PSDoh/PviiShJ0gtqZZiQWgw1MGv8Txh7WlfI/QobTyFkazk7TEYnUt3K0CZgDtFIPpsKf+XHDK/YZb2SGzh1G6BsnW3ty8rGUnugFyTcT+HdXU9zNqgUgNpsGUjuOHSCnjwTV4V1GM=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBgSlBSlJEtpKhJj2H7DuKmtCggvKxC6o/EZ8HL54jj6#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIXf1GmBQzALPmDKfSXAuVB+xRYqjD32Y59Ej3vCpmUbvcj30rVtWJi5Szv+7TjUVBrZXLEVEpadyLc+MRDWQ8Y=#012 create=True mode=0644 path=/tmp/ansible.1hpebu9h state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:37:57 np0005604943 python3.9[124241]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.1hpebu9h' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:37:58 np0005604943 python3.9[124395]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.1hpebu9h state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:37:58 np0005604943 systemd[1]: session-40.scope: Deactivated successfully.
Feb  2 06:37:58 np0005604943 systemd[1]: session-40.scope: Consumed 4.468s CPU time.
Feb  2 06:37:58 np0005604943 systemd-logind[786]: Session 40 logged out. Waiting for processes to exit.
Feb  2 06:37:58 np0005604943 systemd-logind[786]: Removed session 40.
Feb  2 06:37:59 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.d scrub starts
Feb  2 06:37:59 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.d scrub ok
Feb  2 06:37:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:00 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.e scrub starts
Feb  2 06:38:00 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.e scrub ok
Feb  2 06:38:00 np0005604943 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb  2 06:38:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:38:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:04 np0005604943 systemd-logind[786]: New session 41 of user zuul.
Feb  2 06:38:04 np0005604943 systemd[1]: Started Session 41 of User zuul.
Feb  2 06:38:05 np0005604943 python3.9[124576]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:38:05 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Feb  2 06:38:05 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Feb  2 06:38:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:06 np0005604943 python3.9[124732]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb  2 06:38:06 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.c scrub starts
Feb  2 06:38:06 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.c scrub ok
Feb  2 06:38:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:38:07 np0005604943 python3.9[124886]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:38:07 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.b scrub starts
Feb  2 06:38:07 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 6.b scrub ok
Feb  2 06:38:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:07 np0005604943 python3.9[125039]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:38:08 np0005604943 python3.9[125192]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:38:09 np0005604943 python3.9[125344]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:38:09
Feb  2 06:38:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:38:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:38:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'images', '.mgr', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'volumes']
Feb  2 06:38:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:38:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:09 np0005604943 systemd[1]: session-41.scope: Deactivated successfully.
Feb  2 06:38:09 np0005604943 systemd[1]: session-41.scope: Consumed 3.775s CPU time.
Feb  2 06:38:09 np0005604943 systemd-logind[786]: Session 41 logged out. Waiting for processes to exit.
Feb  2 06:38:09 np0005604943 systemd-logind[786]: Removed session 41.
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:38:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:38:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:38:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:14 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Feb  2 06:38:14 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Feb  2 06:38:15 np0005604943 systemd-logind[786]: New session 42 of user zuul.
Feb  2 06:38:15 np0005604943 systemd[1]: Started Session 42 of User zuul.
Feb  2 06:38:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:16 np0005604943 python3.9[125522]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:38:16 np0005604943 python3.9[125678]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:38:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:38:17 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Feb  2 06:38:17 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Feb  2 06:38:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:17 np0005604943 python3.9[125762]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  2 06:38:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:19 np0005604943 python3.9[125913]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:38:21 np0005604943 python3.9[126064]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:38:21 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Feb  2 06:38:21 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Feb  2 06:38:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:21 np0005604943 python3.9[126214]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:38:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:38:22 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Feb  2 06:38:22 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Feb  2 06:38:22 np0005604943 python3.9[126364]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:38:23 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Feb  2 06:38:23 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Feb  2 06:38:23 np0005604943 systemd-logind[786]: Session 42 logged out. Waiting for processes to exit.
Feb  2 06:38:23 np0005604943 systemd[1]: session-42.scope: Deactivated successfully.
Feb  2 06:38:23 np0005604943 systemd[1]: session-42.scope: Consumed 5.545s CPU time.
Feb  2 06:38:23 np0005604943 systemd-logind[786]: Removed session 42.
Feb  2 06:38:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:38:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:38:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:38:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:38:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:38:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:38:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:38:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:38:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:38:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:38:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:38:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:38:24 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.a scrub starts
Feb  2 06:38:24 np0005604943 podman[126533]: 2026-02-02 11:38:24.278501887 +0000 UTC m=+0.056616607 container create 728ca952bb6bac923d7ca41e5bccd5936eb801454143b02ae4a3929547e79622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wozniak, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:38:24 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.a scrub ok
Feb  2 06:38:24 np0005604943 systemd[1]: Started libpod-conmon-728ca952bb6bac923d7ca41e5bccd5936eb801454143b02ae4a3929547e79622.scope.
Feb  2 06:38:24 np0005604943 podman[126533]: 2026-02-02 11:38:24.250855666 +0000 UTC m=+0.028970436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:38:24 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:38:24 np0005604943 podman[126533]: 2026-02-02 11:38:24.380859055 +0000 UTC m=+0.158973755 container init 728ca952bb6bac923d7ca41e5bccd5936eb801454143b02ae4a3929547e79622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 06:38:24 np0005604943 podman[126533]: 2026-02-02 11:38:24.389289955 +0000 UTC m=+0.167404685 container start 728ca952bb6bac923d7ca41e5bccd5936eb801454143b02ae4a3929547e79622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:38:24 np0005604943 goofy_wozniak[126549]: 167 167
Feb  2 06:38:24 np0005604943 systemd[1]: libpod-728ca952bb6bac923d7ca41e5bccd5936eb801454143b02ae4a3929547e79622.scope: Deactivated successfully.
Feb  2 06:38:24 np0005604943 podman[126533]: 2026-02-02 11:38:24.399752538 +0000 UTC m=+0.177867268 container attach 728ca952bb6bac923d7ca41e5bccd5936eb801454143b02ae4a3929547e79622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wozniak, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:38:24 np0005604943 podman[126533]: 2026-02-02 11:38:24.400004905 +0000 UTC m=+0.178119595 container died 728ca952bb6bac923d7ca41e5bccd5936eb801454143b02ae4a3929547e79622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Feb  2 06:38:24 np0005604943 systemd[1]: var-lib-containers-storage-overlay-20112bd6b1eca7c329a52fb6c9b2f604191b5cca541591f2ab035086d486231f-merged.mount: Deactivated successfully.
Feb  2 06:38:24 np0005604943 podman[126533]: 2026-02-02 11:38:24.464870506 +0000 UTC m=+0.242985216 container remove 728ca952bb6bac923d7ca41e5bccd5936eb801454143b02ae4a3929547e79622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:38:24 np0005604943 systemd[1]: libpod-conmon-728ca952bb6bac923d7ca41e5bccd5936eb801454143b02ae4a3929547e79622.scope: Deactivated successfully.
Feb  2 06:38:24 np0005604943 podman[126573]: 2026-02-02 11:38:24.60848496 +0000 UTC m=+0.049930233 container create c3f400f3b617827a6b9a85f3977c5426d74b7e569630943225948926a4ec6d48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_goldberg, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb  2 06:38:24 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:38:24 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:38:24 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:38:24 np0005604943 systemd[1]: Started libpod-conmon-c3f400f3b617827a6b9a85f3977c5426d74b7e569630943225948926a4ec6d48.scope.
Feb  2 06:38:24 np0005604943 podman[126573]: 2026-02-02 11:38:24.580530381 +0000 UTC m=+0.021975734 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:38:24 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:38:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02840d4241ff1ca7811b4d44fd48962e31b6174c34416dfd5e746f497c6a06a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:38:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02840d4241ff1ca7811b4d44fd48962e31b6174c34416dfd5e746f497c6a06a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:38:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02840d4241ff1ca7811b4d44fd48962e31b6174c34416dfd5e746f497c6a06a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:38:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02840d4241ff1ca7811b4d44fd48962e31b6174c34416dfd5e746f497c6a06a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:38:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02840d4241ff1ca7811b4d44fd48962e31b6174c34416dfd5e746f497c6a06a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:38:24 np0005604943 podman[126573]: 2026-02-02 11:38:24.719396011 +0000 UTC m=+0.160841294 container init c3f400f3b617827a6b9a85f3977c5426d74b7e569630943225948926a4ec6d48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 06:38:24 np0005604943 podman[126573]: 2026-02-02 11:38:24.724857224 +0000 UTC m=+0.166302477 container start c3f400f3b617827a6b9a85f3977c5426d74b7e569630943225948926a4ec6d48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 06:38:24 np0005604943 podman[126573]: 2026-02-02 11:38:24.728831658 +0000 UTC m=+0.170276911 container attach c3f400f3b617827a6b9a85f3977c5426d74b7e569630943225948926a4ec6d48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_goldberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb  2 06:38:25 np0005604943 tender_goldberg[126589]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:38:25 np0005604943 tender_goldberg[126589]: --> All data devices are unavailable
Feb  2 06:38:25 np0005604943 systemd[1]: libpod-c3f400f3b617827a6b9a85f3977c5426d74b7e569630943225948926a4ec6d48.scope: Deactivated successfully.
Feb  2 06:38:25 np0005604943 podman[126573]: 2026-02-02 11:38:25.265411736 +0000 UTC m=+0.706857049 container died c3f400f3b617827a6b9a85f3977c5426d74b7e569630943225948926a4ec6d48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:38:25 np0005604943 systemd[1]: var-lib-containers-storage-overlay-02840d4241ff1ca7811b4d44fd48962e31b6174c34416dfd5e746f497c6a06a9-merged.mount: Deactivated successfully.
Feb  2 06:38:25 np0005604943 podman[126573]: 2026-02-02 11:38:25.317268798 +0000 UTC m=+0.758714081 container remove c3f400f3b617827a6b9a85f3977c5426d74b7e569630943225948926a4ec6d48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_goldberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:38:25 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Feb  2 06:38:25 np0005604943 systemd[1]: libpod-conmon-c3f400f3b617827a6b9a85f3977c5426d74b7e569630943225948926a4ec6d48.scope: Deactivated successfully.
Feb  2 06:38:25 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Feb  2 06:38:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:25 np0005604943 podman[126683]: 2026-02-02 11:38:25.736277231 +0000 UTC m=+0.038511665 container create 1256e85c63c9041469587ed5ecd0855ff15e498f8febfb3d692209a726c8f712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:38:25 np0005604943 systemd[1]: Started libpod-conmon-1256e85c63c9041469587ed5ecd0855ff15e498f8febfb3d692209a726c8f712.scope.
Feb  2 06:38:25 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:38:25 np0005604943 podman[126683]: 2026-02-02 11:38:25.803565396 +0000 UTC m=+0.105799909 container init 1256e85c63c9041469587ed5ecd0855ff15e498f8febfb3d692209a726c8f712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gates, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:38:25 np0005604943 podman[126683]: 2026-02-02 11:38:25.808871814 +0000 UTC m=+0.111106267 container start 1256e85c63c9041469587ed5ecd0855ff15e498f8febfb3d692209a726c8f712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 06:38:25 np0005604943 elegant_gates[126700]: 167 167
Feb  2 06:38:25 np0005604943 systemd[1]: libpod-1256e85c63c9041469587ed5ecd0855ff15e498f8febfb3d692209a726c8f712.scope: Deactivated successfully.
Feb  2 06:38:25 np0005604943 podman[126683]: 2026-02-02 11:38:25.812860938 +0000 UTC m=+0.115095391 container attach 1256e85c63c9041469587ed5ecd0855ff15e498f8febfb3d692209a726c8f712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gates, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:38:25 np0005604943 conmon[126700]: conmon 1256e85c63c904146958 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1256e85c63c9041469587ed5ecd0855ff15e498f8febfb3d692209a726c8f712.scope/container/memory.events
Feb  2 06:38:25 np0005604943 podman[126683]: 2026-02-02 11:38:25.813745342 +0000 UTC m=+0.115979765 container died 1256e85c63c9041469587ed5ecd0855ff15e498f8febfb3d692209a726c8f712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 06:38:25 np0005604943 podman[126683]: 2026-02-02 11:38:25.720368447 +0000 UTC m=+0.022602890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:38:25 np0005604943 systemd[1]: var-lib-containers-storage-overlay-59d7d5c9dda8a0f5edd7d6a04af8cdaf30076017996e1323e7174d2e501d3ed6-merged.mount: Deactivated successfully.
Feb  2 06:38:25 np0005604943 podman[126683]: 2026-02-02 11:38:25.849401691 +0000 UTC m=+0.151636144 container remove 1256e85c63c9041469587ed5ecd0855ff15e498f8febfb3d692209a726c8f712 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_gates, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:38:25 np0005604943 systemd[1]: libpod-conmon-1256e85c63c9041469587ed5ecd0855ff15e498f8febfb3d692209a726c8f712.scope: Deactivated successfully.
Feb  2 06:38:26 np0005604943 podman[126724]: 2026-02-02 11:38:26.009835134 +0000 UTC m=+0.055780506 container create bf51430a6fd3cede69b32debea2fef2b7f36611498e6592d44d2dac46e44a5d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 06:38:26 np0005604943 systemd[1]: Started libpod-conmon-bf51430a6fd3cede69b32debea2fef2b7f36611498e6592d44d2dac46e44a5d6.scope.
Feb  2 06:38:26 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:38:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18283d339fcd2f7ec7498d98cb9c479ef8582ca7839225e82ae7e237815a0c18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:38:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18283d339fcd2f7ec7498d98cb9c479ef8582ca7839225e82ae7e237815a0c18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:38:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18283d339fcd2f7ec7498d98cb9c479ef8582ca7839225e82ae7e237815a0c18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:38:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18283d339fcd2f7ec7498d98cb9c479ef8582ca7839225e82ae7e237815a0c18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:38:26 np0005604943 podman[126724]: 2026-02-02 11:38:25.991349482 +0000 UTC m=+0.037294854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:38:26 np0005604943 podman[126724]: 2026-02-02 11:38:26.093592588 +0000 UTC m=+0.139538000 container init bf51430a6fd3cede69b32debea2fef2b7f36611498e6592d44d2dac46e44a5d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bhabha, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:38:26 np0005604943 podman[126724]: 2026-02-02 11:38:26.09942487 +0000 UTC m=+0.145370252 container start bf51430a6fd3cede69b32debea2fef2b7f36611498e6592d44d2dac46e44a5d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bhabha, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 06:38:26 np0005604943 podman[126724]: 2026-02-02 11:38:26.103001122 +0000 UTC m=+0.148946494 container attach bf51430a6fd3cede69b32debea2fef2b7f36611498e6592d44d2dac46e44a5d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bhabha, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:38:26 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Feb  2 06:38:26 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]: {
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:    "0": [
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:        {
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "devices": [
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "/dev/loop3"
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            ],
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_name": "ceph_lv0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_size": "21470642176",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "name": "ceph_lv0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "tags": {
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.cluster_name": "ceph",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.crush_device_class": "",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.encrypted": "0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.objectstore": "bluestore",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.osd_id": "0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.type": "block",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.vdo": "0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.with_tpm": "0"
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            },
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "type": "block",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "vg_name": "ceph_vg0"
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:        }
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:    ],
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:    "1": [
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:        {
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "devices": [
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "/dev/loop4"
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            ],
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_name": "ceph_lv1",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_size": "21470642176",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "name": "ceph_lv1",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "tags": {
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.cluster_name": "ceph",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.crush_device_class": "",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.encrypted": "0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.objectstore": "bluestore",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.osd_id": "1",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.type": "block",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.vdo": "0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.with_tpm": "0"
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            },
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "type": "block",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "vg_name": "ceph_vg1"
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:        }
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:    ],
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:    "2": [
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:        {
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "devices": [
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "/dev/loop5"
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            ],
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_name": "ceph_lv2",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_size": "21470642176",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "name": "ceph_lv2",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "tags": {
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.cluster_name": "ceph",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.crush_device_class": "",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.encrypted": "0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.objectstore": "bluestore",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.osd_id": "2",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.type": "block",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.vdo": "0",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:                "ceph.with_tpm": "0"
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            },
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "type": "block",
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:            "vg_name": "ceph_vg2"
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:        }
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]:    ]
Feb  2 06:38:26 np0005604943 sweet_bhabha[126741]: }
Feb  2 06:38:26 np0005604943 systemd[1]: libpod-bf51430a6fd3cede69b32debea2fef2b7f36611498e6592d44d2dac46e44a5d6.scope: Deactivated successfully.
Feb  2 06:38:26 np0005604943 podman[126724]: 2026-02-02 11:38:26.39747656 +0000 UTC m=+0.443421932 container died bf51430a6fd3cede69b32debea2fef2b7f36611498e6592d44d2dac46e44a5d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bhabha, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:38:26 np0005604943 systemd[1]: var-lib-containers-storage-overlay-18283d339fcd2f7ec7498d98cb9c479ef8582ca7839225e82ae7e237815a0c18-merged.mount: Deactivated successfully.
Feb  2 06:38:26 np0005604943 podman[126724]: 2026-02-02 11:38:26.448367986 +0000 UTC m=+0.494313328 container remove bf51430a6fd3cede69b32debea2fef2b7f36611498e6592d44d2dac46e44a5d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bhabha, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 06:38:26 np0005604943 systemd[1]: libpod-conmon-bf51430a6fd3cede69b32debea2fef2b7f36611498e6592d44d2dac46e44a5d6.scope: Deactivated successfully.
Feb  2 06:38:26 np0005604943 podman[126824]: 2026-02-02 11:38:26.865109971 +0000 UTC m=+0.032396816 container create 6411a74eaca1d7ed8154a4bbdd84885bd46b54bb070c5e995d1224b9bac91a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noyce, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 06:38:26 np0005604943 systemd[1]: Started libpod-conmon-6411a74eaca1d7ed8154a4bbdd84885bd46b54bb070c5e995d1224b9bac91a3e.scope.
Feb  2 06:38:26 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:38:26 np0005604943 podman[126824]: 2026-02-02 11:38:26.937258792 +0000 UTC m=+0.104545707 container init 6411a74eaca1d7ed8154a4bbdd84885bd46b54bb070c5e995d1224b9bac91a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:38:26 np0005604943 podman[126824]: 2026-02-02 11:38:26.947027526 +0000 UTC m=+0.114314391 container start 6411a74eaca1d7ed8154a4bbdd84885bd46b54bb070c5e995d1224b9bac91a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noyce, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:38:26 np0005604943 podman[126824]: 2026-02-02 11:38:26.853650232 +0000 UTC m=+0.020937107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:38:26 np0005604943 podman[126824]: 2026-02-02 11:38:26.950708783 +0000 UTC m=+0.117995698 container attach 6411a74eaca1d7ed8154a4bbdd84885bd46b54bb070c5e995d1224b9bac91a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noyce, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 06:38:26 np0005604943 beautiful_noyce[126840]: 167 167
Feb  2 06:38:26 np0005604943 systemd[1]: libpod-6411a74eaca1d7ed8154a4bbdd84885bd46b54bb070c5e995d1224b9bac91a3e.scope: Deactivated successfully.
Feb  2 06:38:26 np0005604943 podman[126824]: 2026-02-02 11:38:26.951826532 +0000 UTC m=+0.119113397 container died 6411a74eaca1d7ed8154a4bbdd84885bd46b54bb070c5e995d1224b9bac91a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noyce, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 06:38:26 np0005604943 systemd[1]: var-lib-containers-storage-overlay-2586946ef28038cf9483b7f0b2082781558e3ee9aa0389fc8a3c05bc6b3d7e40-merged.mount: Deactivated successfully.
Feb  2 06:38:26 np0005604943 podman[126824]: 2026-02-02 11:38:26.987944343 +0000 UTC m=+0.155231188 container remove 6411a74eaca1d7ed8154a4bbdd84885bd46b54bb070c5e995d1224b9bac91a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noyce, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 06:38:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:38:27 np0005604943 systemd[1]: libpod-conmon-6411a74eaca1d7ed8154a4bbdd84885bd46b54bb070c5e995d1224b9bac91a3e.scope: Deactivated successfully.
Feb  2 06:38:27 np0005604943 podman[126864]: 2026-02-02 11:38:27.101703409 +0000 UTC m=+0.031365609 container create f7d8c33fd28a00df6670fe7977a2f8ca0292584ca24fe4bc24b93a577d076bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_tesla, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:38:27 np0005604943 systemd[1]: Started libpod-conmon-f7d8c33fd28a00df6670fe7977a2f8ca0292584ca24fe4bc24b93a577d076bb4.scope.
Feb  2 06:38:27 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:38:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57f817b098a06efe58af7984f84304447e43f29fbafb0a33b699d7130be3a62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:38:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57f817b098a06efe58af7984f84304447e43f29fbafb0a33b699d7130be3a62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:38:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57f817b098a06efe58af7984f84304447e43f29fbafb0a33b699d7130be3a62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:38:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d57f817b098a06efe58af7984f84304447e43f29fbafb0a33b699d7130be3a62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:38:27 np0005604943 podman[126864]: 2026-02-02 11:38:27.087706064 +0000 UTC m=+0.017368284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:38:27 np0005604943 podman[126864]: 2026-02-02 11:38:27.207153918 +0000 UTC m=+0.136816138 container init f7d8c33fd28a00df6670fe7977a2f8ca0292584ca24fe4bc24b93a577d076bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 06:38:27 np0005604943 podman[126864]: 2026-02-02 11:38:27.215517096 +0000 UTC m=+0.145179296 container start f7d8c33fd28a00df6670fe7977a2f8ca0292584ca24fe4bc24b93a577d076bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_tesla, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:38:27 np0005604943 podman[126864]: 2026-02-02 11:38:27.220060574 +0000 UTC m=+0.149722774 container attach f7d8c33fd28a00df6670fe7977a2f8ca0292584ca24fe4bc24b93a577d076bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:38:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:27 np0005604943 lvm[126960]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:38:27 np0005604943 lvm[126958]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:38:27 np0005604943 lvm[126960]: VG ceph_vg1 finished
Feb  2 06:38:27 np0005604943 lvm[126958]: VG ceph_vg0 finished
Feb  2 06:38:27 np0005604943 lvm[126962]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:38:27 np0005604943 lvm[126962]: VG ceph_vg2 finished
Feb  2 06:38:28 np0005604943 compassionate_tesla[126881]: {}
Feb  2 06:38:28 np0005604943 systemd[1]: libpod-f7d8c33fd28a00df6670fe7977a2f8ca0292584ca24fe4bc24b93a577d076bb4.scope: Deactivated successfully.
Feb  2 06:38:28 np0005604943 podman[126864]: 2026-02-02 11:38:28.038617145 +0000 UTC m=+0.968279355 container died f7d8c33fd28a00df6670fe7977a2f8ca0292584ca24fe4bc24b93a577d076bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 06:38:28 np0005604943 systemd[1]: libpod-f7d8c33fd28a00df6670fe7977a2f8ca0292584ca24fe4bc24b93a577d076bb4.scope: Consumed 1.079s CPU time.
Feb  2 06:38:28 np0005604943 systemd[1]: var-lib-containers-storage-overlay-d57f817b098a06efe58af7984f84304447e43f29fbafb0a33b699d7130be3a62-merged.mount: Deactivated successfully.
Feb  2 06:38:28 np0005604943 podman[126864]: 2026-02-02 11:38:28.079634424 +0000 UTC m=+1.009296624 container remove f7d8c33fd28a00df6670fe7977a2f8ca0292584ca24fe4bc24b93a577d076bb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 06:38:28 np0005604943 systemd[1]: libpod-conmon-f7d8c33fd28a00df6670fe7977a2f8ca0292584ca24fe4bc24b93a577d076bb4.scope: Deactivated successfully.
Feb  2 06:38:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:38:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:38:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:38:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:38:28 np0005604943 systemd-logind[786]: New session 43 of user zuul.
Feb  2 06:38:28 np0005604943 systemd[1]: Started Session 43 of User zuul.
Feb  2 06:38:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:38:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:38:29 np0005604943 python3.9[127156]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:38:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:30 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Feb  2 06:38:30 np0005604943 ceph-osd[87192]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Feb  2 06:38:30 np0005604943 python3.9[127312]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:38:31 np0005604943 python3.9[127464]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:38:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:38:32 np0005604943 python3.9[127616]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:32 np0005604943 python3.9[127739]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032311.7914145-60-71446094340434/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7e4706e46df712bd3b58750b3323d4e195539986 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:33 np0005604943 python3.9[127891]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:34 np0005604943 python3.9[128014]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032313.1734753-60-57830816551438/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a3670c8d31d8bd2873ba0ebc0986f63459afcd64 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:34 np0005604943 python3.9[128166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:35 np0005604943 python3.9[128289]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032314.254918-60-192845541211112/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=43021e0fa250a08a5cbd9411d21144fec853edd0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:35 np0005604943 python3.9[128441]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:38:36 np0005604943 python3.9[128593]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:38:36 np0005604943 python3.9[128745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:38:37 np0005604943 python3.9[128868]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032316.520206-119-96144797063248/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=68b7c9d6eec4410c12dec999f4b11e1bcc8163fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:38 np0005604943 python3.9[129020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:38 np0005604943 python3.9[129143]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032317.6559644-119-194980472191469/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=27abdbc0650e75f5e9c9953af0f3246bf93f70a8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:39 np0005604943 python3.9[129295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:39 np0005604943 python3.9[129418]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032318.7221837-119-152025964216831/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=29f01c5dc4d2ff187f1142156bbb6503fb8e8732 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:40 np0005604943 python3.9[129570]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:38:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:38:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:38:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:38:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:38:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:38:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:38:40 np0005604943 python3.9[129722]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:38:41 np0005604943 systemd[1]: session-17.scope: Deactivated successfully.
Feb  2 06:38:41 np0005604943 systemd[1]: session-17.scope: Consumed 1min 30.484s CPU time.
Feb  2 06:38:41 np0005604943 systemd-logind[786]: Session 17 logged out. Waiting for processes to exit.
Feb  2 06:38:41 np0005604943 systemd-logind[786]: Removed session 17.
Feb  2 06:38:41 np0005604943 python3.9[129874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:38:42 np0005604943 python3.9[129997]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032321.1708443-178-26394615785540/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6ae8e93117c03cfbd19fc8f0441b02ad44df1acf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:42 np0005604943 python3.9[130149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:43 np0005604943 python3.9[130272]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032322.3923464-178-86218312871450/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=27abdbc0650e75f5e9c9953af0f3246bf93f70a8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:43 np0005604943 python3.9[130424]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:44 np0005604943 python3.9[130547]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032323.5273976-178-103335627601602/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=33797d73043dc5684efab07483c1d8319aacac2a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:45 np0005604943 python3.9[130699]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:38:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:46 np0005604943 python3.9[130851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:46 np0005604943 python3.9[130974]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032325.7538896-246-116867754807575/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=89824fc7f99ee1c7063912a8f8135620e81daa3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:38:47 np0005604943 python3.9[131126]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:38:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:47 np0005604943 python3.9[131278]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:48 np0005604943 python3.9[131401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032327.4873245-270-165986305413318/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=89824fc7f99ee1c7063912a8f8135620e81daa3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:49 np0005604943 python3.9[131553]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:38:49 np0005604943 python3.9[131705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:50 np0005604943 python3.9[131828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032329.2468023-294-200088323287897/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=89824fc7f99ee1c7063912a8f8135620e81daa3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:50 np0005604943 python3.9[131980]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:38:51 np0005604943 python3.9[132132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:51 np0005604943 python3.9[132255]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032331.019907-318-63578552425717/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=89824fc7f99ee1c7063912a8f8135620e81daa3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:38:52 np0005604943 python3.9[132407]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:38:53 np0005604943 python3.9[132560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:53 np0005604943 python3.9[132683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032332.8277538-342-217745253009804/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=89824fc7f99ee1c7063912a8f8135620e81daa3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:54 np0005604943 python3.9[132835]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:38:55 np0005604943 python3.9[132987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:38:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:55 np0005604943 python3.9[133110]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032334.7776418-366-270374911042371/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=89824fc7f99ee1c7063912a8f8135620e81daa3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:38:56 np0005604943 systemd[1]: session-43.scope: Deactivated successfully.
Feb  2 06:38:56 np0005604943 systemd[1]: session-43.scope: Consumed 21.157s CPU time.
Feb  2 06:38:56 np0005604943 systemd-logind[786]: Session 43 logged out. Waiting for processes to exit.
Feb  2 06:38:56 np0005604943 systemd-logind[786]: Removed session 43.
Feb  2 06:38:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:38:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:38:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:01 np0005604943 systemd-logind[786]: New session 44 of user zuul.
Feb  2 06:39:01 np0005604943 systemd[1]: Started Session 44 of User zuul.
Feb  2 06:39:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:39:02 np0005604943 python3.9[133290]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:03 np0005604943 python3.9[133442]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:03 np0005604943 python3.9[133565]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032342.6545475-29-160441565057935/.source.conf _original_basename=ceph.conf follow=False checksum=e0be5ca7ce37763054308055dc6589448a114dd7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:04 np0005604943 python3.9[133717]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:04 np0005604943 python3.9[133840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032343.9659731-29-130786021749561/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=9ed8f834e291931712ada0e12cd8297435f539d4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:05 np0005604943 systemd[1]: session-44.scope: Deactivated successfully.
Feb  2 06:39:05 np0005604943 systemd[1]: session-44.scope: Consumed 2.336s CPU time.
Feb  2 06:39:05 np0005604943 systemd-logind[786]: Session 44 logged out. Waiting for processes to exit.
Feb  2 06:39:05 np0005604943 systemd-logind[786]: Removed session 44.
Feb  2 06:39:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:39:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:39:09
Feb  2 06:39:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:39:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:39:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'images', '.rgw.root', 'vms', 'backups', 'volumes', 'default.rgw.log']
Feb  2 06:39:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:39:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:39:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:39:11 np0005604943 systemd-logind[786]: New session 45 of user zuul.
Feb  2 06:39:11 np0005604943 systemd[1]: Started Session 45 of User zuul.
Feb  2 06:39:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:11 np0005604943 python3.9[134018]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:39:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:39:13 np0005604943 python3.9[134174]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:39:13 np0005604943 python3.9[134326]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:39:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:14 np0005604943 python3.9[134476]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:39:15 np0005604943 python3.9[134628]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb  2 06:39:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:39:17 np0005604943 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Feb  2 06:39:17 np0005604943 python3.9[134784]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:39:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:18 np0005604943 python3.9[134868]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:39:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:20 np0005604943 python3.9[135021]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:39:21 np0005604943 python3[135176]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb  2 06:39:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:21 np0005604943 python3.9[135328]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:39:23 np0005604943 python3.9[135480]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:23 np0005604943 python3.9[135558]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:24 np0005604943 python3.9[135710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:24 np0005604943 python3.9[135788]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.1w84xonf recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:25 np0005604943 python3.9[135940]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:25 np0005604943 python3.9[136018]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:26 np0005604943 python3.9[136170]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:39:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:39:27 np0005604943 python3[136323]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  2 06:39:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:27 np0005604943 python3.9[136475]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:28 np0005604943 python3.9[136646]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032367.3560221-152-17465814655417/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:39:28 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:39:29 np0005604943 python3.9[136883]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:29 np0005604943 podman[136898]: 2026-02-02 11:39:29.102470876 +0000 UTC m=+0.039372081 container create ba4448ff68b851fdba90e788830e6a9c68ed0f4d216b224632769be22944ca96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 06:39:29 np0005604943 systemd[1]: Started libpod-conmon-ba4448ff68b851fdba90e788830e6a9c68ed0f4d216b224632769be22944ca96.scope.
Feb  2 06:39:29 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:39:29 np0005604943 podman[136898]: 2026-02-02 11:39:29.082077048 +0000 UTC m=+0.018978313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:39:29 np0005604943 podman[136898]: 2026-02-02 11:39:29.18394814 +0000 UTC m=+0.120849445 container init ba4448ff68b851fdba90e788830e6a9c68ed0f4d216b224632769be22944ca96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:39:29 np0005604943 podman[136898]: 2026-02-02 11:39:29.190943375 +0000 UTC m=+0.127844590 container start ba4448ff68b851fdba90e788830e6a9c68ed0f4d216b224632769be22944ca96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 06:39:29 np0005604943 podman[136898]: 2026-02-02 11:39:29.194533229 +0000 UTC m=+0.131434524 container attach ba4448ff68b851fdba90e788830e6a9c68ed0f4d216b224632769be22944ca96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 06:39:29 np0005604943 gifted_sanderson[136946]: 167 167
Feb  2 06:39:29 np0005604943 systemd[1]: libpod-ba4448ff68b851fdba90e788830e6a9c68ed0f4d216b224632769be22944ca96.scope: Deactivated successfully.
Feb  2 06:39:29 np0005604943 conmon[136946]: conmon ba4448ff68b851fdba90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba4448ff68b851fdba90e788830e6a9c68ed0f4d216b224632769be22944ca96.scope/container/memory.events
Feb  2 06:39:29 np0005604943 podman[136898]: 2026-02-02 11:39:29.198168996 +0000 UTC m=+0.135070211 container died ba4448ff68b851fdba90e788830e6a9c68ed0f4d216b224632769be22944ca96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:39:29 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4270173c5d2527e6c0aad442a6a91c8cc006ba364faa51eca6b64b2508297883-merged.mount: Deactivated successfully.
Feb  2 06:39:29 np0005604943 podman[136898]: 2026-02-02 11:39:29.245244109 +0000 UTC m=+0.182145344 container remove ba4448ff68b851fdba90e788830e6a9c68ed0f4d216b224632769be22944ca96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sanderson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 06:39:29 np0005604943 systemd[1]: libpod-conmon-ba4448ff68b851fdba90e788830e6a9c68ed0f4d216b224632769be22944ca96.scope: Deactivated successfully.
Feb  2 06:39:29 np0005604943 podman[137032]: 2026-02-02 11:39:29.367729877 +0000 UTC m=+0.049681715 container create b093cbe6472d5aee9cec267086da2da52de3df6f8a74f41b3a9e2ee33f6ec683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:39:29 np0005604943 systemd[1]: Started libpod-conmon-b093cbe6472d5aee9cec267086da2da52de3df6f8a74f41b3a9e2ee33f6ec683.scope.
Feb  2 06:39:29 np0005604943 podman[137032]: 2026-02-02 11:39:29.338762401 +0000 UTC m=+0.020714259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:39:29 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:39:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bede9437b59b5a4a6a1b7e93c2f6ff297b8c72b3611b1df4c3ac560639f59b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:39:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bede9437b59b5a4a6a1b7e93c2f6ff297b8c72b3611b1df4c3ac560639f59b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:39:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bede9437b59b5a4a6a1b7e93c2f6ff297b8c72b3611b1df4c3ac560639f59b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:39:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bede9437b59b5a4a6a1b7e93c2f6ff297b8c72b3611b1df4c3ac560639f59b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:39:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bede9437b59b5a4a6a1b7e93c2f6ff297b8c72b3611b1df4c3ac560639f59b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:39:29 np0005604943 podman[137032]: 2026-02-02 11:39:29.458795383 +0000 UTC m=+0.140747221 container init b093cbe6472d5aee9cec267086da2da52de3df6f8a74f41b3a9e2ee33f6ec683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:39:29 np0005604943 podman[137032]: 2026-02-02 11:39:29.469712592 +0000 UTC m=+0.151664430 container start b093cbe6472d5aee9cec267086da2da52de3df6f8a74f41b3a9e2ee33f6ec683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:39:29 np0005604943 podman[137032]: 2026-02-02 11:39:29.472705901 +0000 UTC m=+0.154657759 container attach b093cbe6472d5aee9cec267086da2da52de3df6f8a74f41b3a9e2ee33f6ec683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:39:29 np0005604943 python3.9[137074]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032368.5985487-167-21323307571470/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:29 np0005604943 naughty_varahamihira[137078]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:39:29 np0005604943 naughty_varahamihira[137078]: --> All data devices are unavailable
Feb  2 06:39:29 np0005604943 systemd[1]: libpod-b093cbe6472d5aee9cec267086da2da52de3df6f8a74f41b3a9e2ee33f6ec683.scope: Deactivated successfully.
Feb  2 06:39:29 np0005604943 podman[137032]: 2026-02-02 11:39:29.922112247 +0000 UTC m=+0.604064095 container died b093cbe6472d5aee9cec267086da2da52de3df6f8a74f41b3a9e2ee33f6ec683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:39:29 np0005604943 systemd[1]: var-lib-containers-storage-overlay-9bede9437b59b5a4a6a1b7e93c2f6ff297b8c72b3611b1df4c3ac560639f59b8-merged.mount: Deactivated successfully.
Feb  2 06:39:29 np0005604943 podman[137032]: 2026-02-02 11:39:29.972078747 +0000 UTC m=+0.654030585 container remove b093cbe6472d5aee9cec267086da2da52de3df6f8a74f41b3a9e2ee33f6ec683 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 06:39:29 np0005604943 systemd[1]: libpod-conmon-b093cbe6472d5aee9cec267086da2da52de3df6f8a74f41b3a9e2ee33f6ec683.scope: Deactivated successfully.
Feb  2 06:39:30 np0005604943 python3.9[137249]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:30 np0005604943 podman[137395]: 2026-02-02 11:39:30.384850416 +0000 UTC m=+0.031047772 container create 81a1628a3802678854713b0efb85162e83e140767d10b8289ddb86041e6ed0f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 06:39:30 np0005604943 systemd[1]: Started libpod-conmon-81a1628a3802678854713b0efb85162e83e140767d10b8289ddb86041e6ed0f3.scope.
Feb  2 06:39:30 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:39:30 np0005604943 podman[137395]: 2026-02-02 11:39:30.465264321 +0000 UTC m=+0.111461697 container init 81a1628a3802678854713b0efb85162e83e140767d10b8289ddb86041e6ed0f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_panini, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:39:30 np0005604943 podman[137395]: 2026-02-02 11:39:30.370570738 +0000 UTC m=+0.016768114 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:39:30 np0005604943 podman[137395]: 2026-02-02 11:39:30.471457245 +0000 UTC m=+0.117654591 container start 81a1628a3802678854713b0efb85162e83e140767d10b8289ddb86041e6ed0f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_panini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:39:30 np0005604943 thirsty_panini[137452]: 167 167
Feb  2 06:39:30 np0005604943 podman[137395]: 2026-02-02 11:39:30.47504759 +0000 UTC m=+0.121245076 container attach 81a1628a3802678854713b0efb85162e83e140767d10b8289ddb86041e6ed0f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_panini, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 06:39:30 np0005604943 systemd[1]: libpod-81a1628a3802678854713b0efb85162e83e140767d10b8289ddb86041e6ed0f3.scope: Deactivated successfully.
Feb  2 06:39:30 np0005604943 podman[137395]: 2026-02-02 11:39:30.47623688 +0000 UTC m=+0.122434266 container died 81a1628a3802678854713b0efb85162e83e140767d10b8289ddb86041e6ed0f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:39:30 np0005604943 systemd[1]: var-lib-containers-storage-overlay-7e4df341d7f9b5efcb2f121b6f92de343c220b4930b7ef6430632a092f9b4ce6-merged.mount: Deactivated successfully.
Feb  2 06:39:30 np0005604943 podman[137395]: 2026-02-02 11:39:30.523186752 +0000 UTC m=+0.169384118 container remove 81a1628a3802678854713b0efb85162e83e140767d10b8289ddb86041e6ed0f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_panini, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 06:39:30 np0005604943 systemd[1]: libpod-conmon-81a1628a3802678854713b0efb85162e83e140767d10b8289ddb86041e6ed0f3.scope: Deactivated successfully.
Feb  2 06:39:30 np0005604943 python3.9[137467]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032369.6722503-182-84407117726869/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:30 np0005604943 podman[137490]: 2026-02-02 11:39:30.654374529 +0000 UTC m=+0.041627332 container create 352fbebe6e628a024c1678a87f054f82b7bef426d620c2cd86c29375db410180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:39:30 np0005604943 systemd[1]: Started libpod-conmon-352fbebe6e628a024c1678a87f054f82b7bef426d620c2cd86c29375db410180.scope.
Feb  2 06:39:30 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:39:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959cedeee21c6a0a3a774d50977e97d0623fbe4eba4c61f4540565a46242e8cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:39:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959cedeee21c6a0a3a774d50977e97d0623fbe4eba4c61f4540565a46242e8cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:39:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959cedeee21c6a0a3a774d50977e97d0623fbe4eba4c61f4540565a46242e8cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:39:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959cedeee21c6a0a3a774d50977e97d0623fbe4eba4c61f4540565a46242e8cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:39:30 np0005604943 podman[137490]: 2026-02-02 11:39:30.634096592 +0000 UTC m=+0.021349425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:39:30 np0005604943 podman[137490]: 2026-02-02 11:39:30.740621797 +0000 UTC m=+0.127874630 container init 352fbebe6e628a024c1678a87f054f82b7bef426d620c2cd86c29375db410180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:39:30 np0005604943 podman[137490]: 2026-02-02 11:39:30.746111863 +0000 UTC m=+0.133364666 container start 352fbebe6e628a024c1678a87f054f82b7bef426d620c2cd86c29375db410180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:39:30 np0005604943 podman[137490]: 2026-02-02 11:39:30.750330064 +0000 UTC m=+0.137582917 container attach 352fbebe6e628a024c1678a87f054f82b7bef426d620c2cd86c29375db410180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mcclintock, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]: {
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:    "0": [
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:        {
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "devices": [
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "/dev/loop3"
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            ],
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_name": "ceph_lv0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_size": "21470642176",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "name": "ceph_lv0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "tags": {
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.cluster_name": "ceph",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.crush_device_class": "",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.encrypted": "0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.objectstore": "bluestore",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.osd_id": "0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.type": "block",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.vdo": "0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.with_tpm": "0"
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            },
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "type": "block",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "vg_name": "ceph_vg0"
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:        }
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:    ],
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:    "1": [
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:        {
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "devices": [
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "/dev/loop4"
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            ],
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_name": "ceph_lv1",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_size": "21470642176",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "name": "ceph_lv1",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "tags": {
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.cluster_name": "ceph",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.crush_device_class": "",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.encrypted": "0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.objectstore": "bluestore",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.osd_id": "1",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.type": "block",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.vdo": "0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.with_tpm": "0"
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            },
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "type": "block",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "vg_name": "ceph_vg1"
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:        }
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:    ],
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:    "2": [
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:        {
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "devices": [
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "/dev/loop5"
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            ],
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_name": "ceph_lv2",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_size": "21470642176",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "name": "ceph_lv2",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "tags": {
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.cluster_name": "ceph",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.crush_device_class": "",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.encrypted": "0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.objectstore": "bluestore",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.osd_id": "2",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.type": "block",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.vdo": "0",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:                "ceph.with_tpm": "0"
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            },
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "type": "block",
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:            "vg_name": "ceph_vg2"
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:        }
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]:    ]
Feb  2 06:39:31 np0005604943 thirsty_mcclintock[137524]: }
Feb  2 06:39:31 np0005604943 systemd[1]: libpod-352fbebe6e628a024c1678a87f054f82b7bef426d620c2cd86c29375db410180.scope: Deactivated successfully.
Feb  2 06:39:31 np0005604943 conmon[137524]: conmon 352fbebe6e628a024c16 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-352fbebe6e628a024c1678a87f054f82b7bef426d620c2cd86c29375db410180.scope/container/memory.events
Feb  2 06:39:31 np0005604943 podman[137490]: 2026-02-02 11:39:31.027044577 +0000 UTC m=+0.414297390 container died 352fbebe6e628a024c1678a87f054f82b7bef426d620c2cd86c29375db410180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mcclintock, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:39:31 np0005604943 systemd[1]: var-lib-containers-storage-overlay-959cedeee21c6a0a3a774d50977e97d0623fbe4eba4c61f4540565a46242e8cd-merged.mount: Deactivated successfully.
Feb  2 06:39:31 np0005604943 podman[137490]: 2026-02-02 11:39:31.067123167 +0000 UTC m=+0.454375970 container remove 352fbebe6e628a024c1678a87f054f82b7bef426d620c2cd86c29375db410180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mcclintock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:39:31 np0005604943 systemd[1]: libpod-conmon-352fbebe6e628a024c1678a87f054f82b7bef426d620c2cd86c29375db410180.scope: Deactivated successfully.
Feb  2 06:39:31 np0005604943 python3.9[137667]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:31 np0005604943 podman[137815]: 2026-02-02 11:39:31.494256594 +0000 UTC m=+0.040421709 container create c5af74192ab0a96cddcc3642f977f14e49af699e3704f860d6f78d7e12d4a790 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_gagarin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 06:39:31 np0005604943 systemd[1]: Started libpod-conmon-c5af74192ab0a96cddcc3642f977f14e49af699e3704f860d6f78d7e12d4a790.scope.
Feb  2 06:39:31 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:39:31 np0005604943 podman[137815]: 2026-02-02 11:39:31.569740478 +0000 UTC m=+0.115905603 container init c5af74192ab0a96cddcc3642f977f14e49af699e3704f860d6f78d7e12d4a790 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_gagarin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:39:31 np0005604943 podman[137815]: 2026-02-02 11:39:31.476864004 +0000 UTC m=+0.023029139 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:39:31 np0005604943 podman[137815]: 2026-02-02 11:39:31.576938129 +0000 UTC m=+0.123103244 container start c5af74192ab0a96cddcc3642f977f14e49af699e3704f860d6f78d7e12d4a790 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:39:31 np0005604943 podman[137815]: 2026-02-02 11:39:31.580686998 +0000 UTC m=+0.126852203 container attach c5af74192ab0a96cddcc3642f977f14e49af699e3704f860d6f78d7e12d4a790 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_gagarin, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:39:31 np0005604943 adoring_gagarin[137883]: 167 167
Feb  2 06:39:31 np0005604943 systemd[1]: libpod-c5af74192ab0a96cddcc3642f977f14e49af699e3704f860d6f78d7e12d4a790.scope: Deactivated successfully.
Feb  2 06:39:31 np0005604943 podman[137815]: 2026-02-02 11:39:31.581703315 +0000 UTC m=+0.127868450 container died c5af74192ab0a96cddcc3642f977f14e49af699e3704f860d6f78d7e12d4a790 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_gagarin, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Feb  2 06:39:31 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a486a2a752d90af913dc67ffb2a844fe57fc3d9f7c044527bdcd5c2472985d2c-merged.mount: Deactivated successfully.
Feb  2 06:39:31 np0005604943 podman[137815]: 2026-02-02 11:39:31.617398519 +0000 UTC m=+0.163563654 container remove c5af74192ab0a96cddcc3642f977f14e49af699e3704f860d6f78d7e12d4a790 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 06:39:31 np0005604943 systemd[1]: libpod-conmon-c5af74192ab0a96cddcc3642f977f14e49af699e3704f860d6f78d7e12d4a790.scope: Deactivated successfully.
Feb  2 06:39:31 np0005604943 podman[137907]: 2026-02-02 11:39:31.731777281 +0000 UTC m=+0.035342565 container create 722be9ce930712087d7a9b54d9f44d1d0719de90f2a252826e539e1d534ef3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:39:31 np0005604943 python3.9[137887]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032370.785333-197-184141028402437/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:31 np0005604943 systemd[1]: Started libpod-conmon-722be9ce930712087d7a9b54d9f44d1d0719de90f2a252826e539e1d534ef3e1.scope.
Feb  2 06:39:31 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:39:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdee9dc037f646f27140006b0463f0b9396036216e9553e14e24b0b9343c2f90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:39:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdee9dc037f646f27140006b0463f0b9396036216e9553e14e24b0b9343c2f90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:39:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdee9dc037f646f27140006b0463f0b9396036216e9553e14e24b0b9343c2f90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:39:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdee9dc037f646f27140006b0463f0b9396036216e9553e14e24b0b9343c2f90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:39:31 np0005604943 podman[137907]: 2026-02-02 11:39:31.714641368 +0000 UTC m=+0.018206672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:39:31 np0005604943 podman[137907]: 2026-02-02 11:39:31.821584944 +0000 UTC m=+0.125150278 container init 722be9ce930712087d7a9b54d9f44d1d0719de90f2a252826e539e1d534ef3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_murdock, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:39:31 np0005604943 podman[137907]: 2026-02-02 11:39:31.827010027 +0000 UTC m=+0.130575311 container start 722be9ce930712087d7a9b54d9f44d1d0719de90f2a252826e539e1d534ef3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_murdock, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:39:31 np0005604943 podman[137907]: 2026-02-02 11:39:31.832132643 +0000 UTC m=+0.135698007 container attach 722be9ce930712087d7a9b54d9f44d1d0719de90f2a252826e539e1d534ef3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_murdock, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Feb  2 06:39:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:39:32 np0005604943 python3.9[138122]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:32 np0005604943 lvm[138154]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:39:32 np0005604943 lvm[138154]: VG ceph_vg1 finished
Feb  2 06:39:32 np0005604943 lvm[138155]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:39:32 np0005604943 lvm[138155]: VG ceph_vg0 finished
Feb  2 06:39:32 np0005604943 lvm[138159]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:39:32 np0005604943 lvm[138159]: VG ceph_vg2 finished
Feb  2 06:39:32 np0005604943 admiring_murdock[137924]: {}
Feb  2 06:39:32 np0005604943 systemd[1]: libpod-722be9ce930712087d7a9b54d9f44d1d0719de90f2a252826e539e1d534ef3e1.scope: Deactivated successfully.
Feb  2 06:39:32 np0005604943 podman[137907]: 2026-02-02 11:39:32.576170845 +0000 UTC m=+0.879736159 container died 722be9ce930712087d7a9b54d9f44d1d0719de90f2a252826e539e1d534ef3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_murdock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 06:39:32 np0005604943 systemd[1]: var-lib-containers-storage-overlay-cdee9dc037f646f27140006b0463f0b9396036216e9553e14e24b0b9343c2f90-merged.mount: Deactivated successfully.
Feb  2 06:39:32 np0005604943 podman[137907]: 2026-02-02 11:39:32.635105483 +0000 UTC m=+0.938670797 container remove 722be9ce930712087d7a9b54d9f44d1d0719de90f2a252826e539e1d534ef3e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_murdock, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:39:32 np0005604943 systemd[1]: libpod-conmon-722be9ce930712087d7a9b54d9f44d1d0719de90f2a252826e539e1d534ef3e1.scope: Deactivated successfully.
Feb  2 06:39:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:39:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:39:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:39:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:39:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:39:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:39:32 np0005604943 python3.9[138303]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032371.898367-212-164859866146338/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:33 np0005604943 python3.9[138472]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:34 np0005604943 python3.9[138624]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:39:34 np0005604943 python3.9[138779]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:35 np0005604943 python3.9[138931]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:39:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:36 np0005604943 python3.9[139084]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:39:36 np0005604943 python3.9[139238]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:39:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:39:37 np0005604943 python3.9[139393]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:38 np0005604943 python3.9[139543]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:39:39 np0005604943 python3.9[139696]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:39:39 np0005604943 ovs-vsctl[139697]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb  2 06:39:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:39 np0005604943 python3.9[139849]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:39:40 np0005604943 python3.9[140004]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:39:40 np0005604943 ovs-vsctl[140005]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Feb  2 06:39:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:39:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:39:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:39:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:39:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:39:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:39:41 np0005604943 python3.9[140155]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:39:41 np0005604943 python3.9[140309]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:39:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:39:42 np0005604943 python3.9[140461]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:42 np0005604943 python3.9[140539]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:39:43 np0005604943 python3.9[140691]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:43 np0005604943 python3.9[140769]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:39:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:44 np0005604943 python3.9[140921]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:44 np0005604943 python3.9[141073]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:45 np0005604943 python3.9[141151]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:45 np0005604943 python3.9[141303]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:45 np0005604943 python3.9[141381]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:46 np0005604943 python3.9[141533]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:39:46 np0005604943 systemd[1]: Reloading.
Feb  2 06:39:46 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:39:46 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:39:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:39:47 np0005604943 python3.9[141722]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:47 np0005604943 python3.9[141800]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:48 np0005604943 python3.9[141952]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:48 np0005604943 python3.9[142030]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:49 np0005604943 python3.9[142182]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:39:49 np0005604943 systemd[1]: Reloading.
Feb  2 06:39:49 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:39:49 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:39:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:49 np0005604943 systemd[1]: Starting Create netns directory...
Feb  2 06:39:49 np0005604943 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  2 06:39:49 np0005604943 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  2 06:39:49 np0005604943 systemd[1]: Finished Create netns directory.
Feb  2 06:39:50 np0005604943 python3.9[142375]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:39:51 np0005604943 python3.9[142527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:51 np0005604943 python3.9[142650]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032390.8298938-463-152039981916672/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:39:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:39:52 np0005604943 python3.9[142802]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:53 np0005604943 python3.9[142954]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:39:53 np0005604943 python3.9[143106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:39:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:54 np0005604943 python3.9[143229]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032393.2288451-496-186073909433806/.source.json _original_basename=.2dmvokbp follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:54 np0005604943 python3.9[143379]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:39:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:56 np0005604943 python3.9[143802]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Feb  2 06:39:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:39:57 np0005604943 python3.9[143954]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  2 06:39:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:39:58 np0005604943 python3[144106]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Feb  2 06:39:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.070898) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032402070948, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1569, "num_deletes": 252, "total_data_size": 2342794, "memory_usage": 2371224, "flush_reason": "Manual Compaction"}
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032402078717, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1364027, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7356, "largest_seqno": 8924, "table_properties": {"data_size": 1358749, "index_size": 2354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14820, "raw_average_key_size": 20, "raw_value_size": 1346528, "raw_average_value_size": 1867, "num_data_blocks": 111, "num_entries": 721, "num_filter_entries": 721, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770032250, "oldest_key_time": 1770032250, "file_creation_time": 1770032402, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7857 microseconds, and 3173 cpu microseconds.
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.078757) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1364027 bytes OK
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.078775) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.080143) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.080157) EVENT_LOG_v1 {"time_micros": 1770032402080153, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.080173) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2335783, prev total WAL file size 2335783, number of live WAL files 2.
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.080650) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1332KB)], [20(7612KB)]
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032402080681, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9159638, "oldest_snapshot_seqno": -1}
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3356 keys, 7082226 bytes, temperature: kUnknown
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032402111748, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7082226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7056353, "index_size": 16356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 80392, "raw_average_key_size": 23, "raw_value_size": 6992285, "raw_average_value_size": 2083, "num_data_blocks": 726, "num_entries": 3356, "num_filter_entries": 3356, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770032402, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.111946) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7082226 bytes
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.115092) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 294.3 rd, 227.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.4 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(11.9) write-amplify(5.2) OK, records in: 3801, records dropped: 445 output_compression: NoCompression
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.115115) EVENT_LOG_v1 {"time_micros": 1770032402115103, "job": 6, "event": "compaction_finished", "compaction_time_micros": 31128, "compaction_time_cpu_micros": 12734, "output_level": 6, "num_output_files": 1, "total_output_size": 7082226, "num_input_records": 3801, "num_output_records": 3356, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032402115369, "job": 6, "event": "table_file_deletion", "file_number": 22}
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032402116136, "job": 6, "event": "table_file_deletion", "file_number": 20}
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.080574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.116185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.116190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.116191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.116193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:40:02 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:40:02.116194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:40:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:04 np0005604943 podman[144119]: 2026-02-02 11:40:04.046119406 +0000 UTC m=+5.351823883 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb  2 06:40:04 np0005604943 podman[144241]: 2026-02-02 11:40:04.181281637 +0000 UTC m=+0.064743791 container create dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb  2 06:40:04 np0005604943 podman[144241]: 2026-02-02 11:40:04.149127978 +0000 UTC m=+0.032590192 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb  2 06:40:04 np0005604943 python3[144106]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb  2 06:40:04 np0005604943 python3.9[144431]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:40:05 np0005604943 python3.9[144585]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:40:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:05 np0005604943 python3.9[144661]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:40:06 np0005604943 python3.9[144812]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770032406.0597434-574-188090636831143/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:40:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:40:07 np0005604943 python3.9[144888]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 06:40:07 np0005604943 systemd[1]: Reloading.
Feb  2 06:40:07 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:40:07 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:40:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:08 np0005604943 python3.9[144999]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:40:08 np0005604943 systemd[1]: Reloading.
Feb  2 06:40:08 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:40:08 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:40:08 np0005604943 systemd[1]: Starting ovn_controller container...
Feb  2 06:40:08 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:40:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcd0b9c59c19976d8d1152cedbd6da157fc7947d760fae19b5fd07fe4dc07631/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:08 np0005604943 systemd[1]: Started /usr/bin/podman healthcheck run dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059.
Feb  2 06:40:08 np0005604943 podman[145040]: 2026-02-02 11:40:08.772199637 +0000 UTC m=+0.333519126 container init dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Feb  2 06:40:08 np0005604943 ovn_controller[145056]: + sudo -E kolla_set_configs
Feb  2 06:40:08 np0005604943 podman[145040]: 2026-02-02 11:40:08.797536521 +0000 UTC m=+0.358855930 container start dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb  2 06:40:08 np0005604943 edpm-start-podman-container[145040]: ovn_controller
Feb  2 06:40:08 np0005604943 systemd[1]: Created slice User Slice of UID 0.
Feb  2 06:40:08 np0005604943 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb  2 06:40:08 np0005604943 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb  2 06:40:08 np0005604943 systemd[1]: Starting User Manager for UID 0...
Feb  2 06:40:08 np0005604943 edpm-start-podman-container[145039]: Creating additional drop-in dependency for "ovn_controller" (dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059)
Feb  2 06:40:08 np0005604943 systemd[1]: Reloading.
Feb  2 06:40:08 np0005604943 podman[145063]: 2026-02-02 11:40:08.902841399 +0000 UTC m=+0.094886877 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:40:08 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:40:08 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:40:08 np0005604943 systemd[145084]: Queued start job for default target Main User Target.
Feb  2 06:40:08 np0005604943 systemd[145084]: Created slice User Application Slice.
Feb  2 06:40:08 np0005604943 systemd[145084]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb  2 06:40:08 np0005604943 systemd[145084]: Started Daily Cleanup of User's Temporary Directories.
Feb  2 06:40:08 np0005604943 systemd[145084]: Reached target Paths.
Feb  2 06:40:08 np0005604943 systemd[145084]: Reached target Timers.
Feb  2 06:40:08 np0005604943 systemd[145084]: Starting D-Bus User Message Bus Socket...
Feb  2 06:40:08 np0005604943 systemd[145084]: Starting Create User's Volatile Files and Directories...
Feb  2 06:40:08 np0005604943 systemd[145084]: Finished Create User's Volatile Files and Directories.
Feb  2 06:40:08 np0005604943 systemd[145084]: Listening on D-Bus User Message Bus Socket.
Feb  2 06:40:08 np0005604943 systemd[145084]: Reached target Sockets.
Feb  2 06:40:08 np0005604943 systemd[145084]: Reached target Basic System.
Feb  2 06:40:08 np0005604943 systemd[145084]: Reached target Main User Target.
Feb  2 06:40:08 np0005604943 systemd[145084]: Startup finished in 123ms.
Feb  2 06:40:09 np0005604943 systemd[1]: Started User Manager for UID 0.
Feb  2 06:40:09 np0005604943 systemd[1]: Started ovn_controller container.
Feb  2 06:40:09 np0005604943 systemd[1]: dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059-1f21a3646628ee04.service: Main process exited, code=exited, status=1/FAILURE
Feb  2 06:40:09 np0005604943 systemd[1]: dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059-1f21a3646628ee04.service: Failed with result 'exit-code'.
Feb  2 06:40:09 np0005604943 systemd[1]: Started Session c1 of User root.
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: INFO:__main__:Validating config file
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: INFO:__main__:Writing out command to execute
Feb  2 06:40:09 np0005604943 systemd[1]: session-c1.scope: Deactivated successfully.
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: ++ cat /run_command
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: + ARGS=
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: + sudo kolla_copy_cacerts
Feb  2 06:40:09 np0005604943 systemd[1]: Started Session c2 of User root.
Feb  2 06:40:09 np0005604943 systemd[1]: session-c2.scope: Deactivated successfully.
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: + [[ ! -n '' ]]
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: + . kolla_extend_start
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: + umask 0022
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Feb  2 06:40:09 np0005604943 NetworkManager[49093]: <info>  [1770032409.2546] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Feb  2 06:40:09 np0005604943 NetworkManager[49093]: <info>  [1770032409.2553] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:40:09 np0005604943 NetworkManager[49093]: <warn>  [1770032409.2554] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 06:40:09 np0005604943 NetworkManager[49093]: <info>  [1770032409.2561] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb  2 06:40:09 np0005604943 NetworkManager[49093]: <info>  [1770032409.2566] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Feb  2 06:40:09 np0005604943 NetworkManager[49093]: <info>  [1770032409.2570] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb  2 06:40:09 np0005604943 kernel: br-int: entered promiscuous mode
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00014|main|INFO|OVS feature set changed, force recompute.
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00022|main|INFO|OVS feature set changed, force recompute.
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Feb  2 06:40:09 np0005604943 systemd-udevd[145184]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  2 06:40:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:09Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  2 06:40:09 np0005604943 NetworkManager[49093]: <info>  [1770032409.2933] manager: (ovn-74721d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Feb  2 06:40:09 np0005604943 kernel: genev_sys_6081: entered promiscuous mode
Feb  2 06:40:09 np0005604943 NetworkManager[49093]: <info>  [1770032409.3205] device (genev_sys_6081): carrier: link connected
Feb  2 06:40:09 np0005604943 NetworkManager[49093]: <info>  [1770032409.3209] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Feb  2 06:40:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:40:09
Feb  2 06:40:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:40:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:40:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', '.mgr', 'backups', 'volumes', 'vms', 'default.rgw.log', 'cephfs.cephfs.data']
Feb  2 06:40:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:40:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:09 np0005604943 python3.9[145314]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb  2 06:40:10 np0005604943 python3.9[145466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:40:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:40:11 np0005604943 python3.9[145589]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032410.236386-619-4003709131658/.source.yaml _original_basename=.x8b0b0af follow=False checksum=ee3b680d2863fc41fa1014a0ed74ac2d54ecfb05 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:40:11 np0005604943 python3.9[145741]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:40:11 np0005604943 ovs-vsctl[145742]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Feb  2 06:40:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:40:12 np0005604943 python3.9[145894]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:40:12 np0005604943 ovs-vsctl[145896]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Feb  2 06:40:13 np0005604943 python3.9[146049]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:40:13 np0005604943 ovs-vsctl[146050]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Feb  2 06:40:13 np0005604943 systemd[1]: session-45.scope: Deactivated successfully.
Feb  2 06:40:13 np0005604943 systemd[1]: session-45.scope: Consumed 50.735s CPU time.
Feb  2 06:40:13 np0005604943 systemd-logind[786]: Session 45 logged out. Waiting for processes to exit.
Feb  2 06:40:13 np0005604943 systemd-logind[786]: Removed session 45.
Feb  2 06:40:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:40:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:18 np0005604943 systemd-logind[786]: New session 47 of user zuul.
Feb  2 06:40:18 np0005604943 systemd[1]: Started Session 47 of User zuul.
Feb  2 06:40:19 np0005604943 systemd[1]: Stopping User Manager for UID 0...
Feb  2 06:40:19 np0005604943 systemd[145084]: Activating special unit Exit the Session...
Feb  2 06:40:19 np0005604943 systemd[145084]: Stopped target Main User Target.
Feb  2 06:40:19 np0005604943 systemd[145084]: Stopped target Basic System.
Feb  2 06:40:19 np0005604943 systemd[145084]: Stopped target Paths.
Feb  2 06:40:19 np0005604943 systemd[145084]: Stopped target Sockets.
Feb  2 06:40:19 np0005604943 systemd[145084]: Stopped target Timers.
Feb  2 06:40:19 np0005604943 systemd[145084]: Stopped Daily Cleanup of User's Temporary Directories.
Feb  2 06:40:19 np0005604943 systemd[145084]: Closed D-Bus User Message Bus Socket.
Feb  2 06:40:19 np0005604943 systemd[145084]: Stopped Create User's Volatile Files and Directories.
Feb  2 06:40:19 np0005604943 systemd[145084]: Removed slice User Application Slice.
Feb  2 06:40:19 np0005604943 systemd[145084]: Reached target Shutdown.
Feb  2 06:40:19 np0005604943 systemd[145084]: Finished Exit the Session.
Feb  2 06:40:19 np0005604943 systemd[145084]: Reached target Exit the Session.
Feb  2 06:40:19 np0005604943 systemd[1]: user@0.service: Deactivated successfully.
Feb  2 06:40:19 np0005604943 systemd[1]: Stopped User Manager for UID 0.
Feb  2 06:40:19 np0005604943 systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb  2 06:40:19 np0005604943 systemd[1]: run-user-0.mount: Deactivated successfully.
Feb  2 06:40:19 np0005604943 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb  2 06:40:19 np0005604943 systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb  2 06:40:19 np0005604943 systemd[1]: Removed slice User Slice of UID 0.
Feb  2 06:40:19 np0005604943 python3.9[146228]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:40:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:20 np0005604943 python3.9[146385]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:21 np0005604943 python3.9[146537]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:40:21 np0005604943 python3.9[146689]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:40:22 np0005604943 python3.9[146841]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:22 np0005604943 python3.9[146993]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:23 np0005604943 python3.9[147143]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:40:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:24 np0005604943 python3.9[147295]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb  2 06:40:25 np0005604943 python3.9[147445]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:26 np0005604943 python3.9[147566]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032425.0600562-81-187232080125798/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:26 np0005604943 python3.9[147716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:40:27 np0005604943 python3.9[147837]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032426.3461702-96-201009364812473/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:28 np0005604943 python3.9[147990]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:40:29 np0005604943 python3.9[148074]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:40:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:31 np0005604943 python3.9[148227]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 06:40:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:31 np0005604943 python3.9[148380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:40:32 np0005604943 python3.9[148501]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032431.553892-133-137134393974882/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:32 np0005604943 python3.9[148676]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:40:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:40:33 np0005604943 python3.9[148853]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032432.5875587-133-29757731104828/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:33 np0005604943 podman[148941]: 2026-02-02 11:40:33.709432703 +0000 UTC m=+0.042444323 container create 6966b270f2c53f7ab98ccf9ee19b35e9b96594b5f5bc348c75b906edcf0b0a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_nash, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 06:40:33 np0005604943 systemd[1]: Started libpod-conmon-6966b270f2c53f7ab98ccf9ee19b35e9b96594b5f5bc348c75b906edcf0b0a9d.scope.
Feb  2 06:40:33 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:40:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:33 np0005604943 podman[148941]: 2026-02-02 11:40:33.785844804 +0000 UTC m=+0.118856444 container init 6966b270f2c53f7ab98ccf9ee19b35e9b96594b5f5bc348c75b906edcf0b0a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:40:33 np0005604943 podman[148941]: 2026-02-02 11:40:33.691092952 +0000 UTC m=+0.024104622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:40:33 np0005604943 podman[148941]: 2026-02-02 11:40:33.792969501 +0000 UTC m=+0.125981131 container start 6966b270f2c53f7ab98ccf9ee19b35e9b96594b5f5bc348c75b906edcf0b0a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_nash, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:40:33 np0005604943 podman[148941]: 2026-02-02 11:40:33.796556195 +0000 UTC m=+0.129567815 container attach 6966b270f2c53f7ab98ccf9ee19b35e9b96594b5f5bc348c75b906edcf0b0a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_nash, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 06:40:33 np0005604943 bold_nash[148957]: 167 167
Feb  2 06:40:33 np0005604943 systemd[1]: libpod-6966b270f2c53f7ab98ccf9ee19b35e9b96594b5f5bc348c75b906edcf0b0a9d.scope: Deactivated successfully.
Feb  2 06:40:33 np0005604943 podman[148941]: 2026-02-02 11:40:33.797979233 +0000 UTC m=+0.130990863 container died 6966b270f2c53f7ab98ccf9ee19b35e9b96594b5f5bc348c75b906edcf0b0a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 06:40:33 np0005604943 systemd[1]: var-lib-containers-storage-overlay-aab7fcaf5b3a23c4c748bf8b89b1b3323f2d56d05b5b56da8e97abbadde40133-merged.mount: Deactivated successfully.
Feb  2 06:40:33 np0005604943 podman[148941]: 2026-02-02 11:40:33.8387266 +0000 UTC m=+0.171738220 container remove 6966b270f2c53f7ab98ccf9ee19b35e9b96594b5f5bc348c75b906edcf0b0a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_nash, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:40:33 np0005604943 systemd[1]: libpod-conmon-6966b270f2c53f7ab98ccf9ee19b35e9b96594b5f5bc348c75b906edcf0b0a9d.scope: Deactivated successfully.
Feb  2 06:40:33 np0005604943 podman[148982]: 2026-02-02 11:40:33.961841874 +0000 UTC m=+0.033284293 container create c2be9fd12c07760b322656771098f2c6960b501cdb95dfbb05491cbe59878bd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_varahamihira, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:40:34 np0005604943 systemd[1]: Started libpod-conmon-c2be9fd12c07760b322656771098f2c6960b501cdb95dfbb05491cbe59878bd8.scope.
Feb  2 06:40:34 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:40:34 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36759500c193d80c09881fe5667f0850d304f71cb2f685d0cdd67d75bb4ac924/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:34 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36759500c193d80c09881fe5667f0850d304f71cb2f685d0cdd67d75bb4ac924/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:34 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36759500c193d80c09881fe5667f0850d304f71cb2f685d0cdd67d75bb4ac924/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:34 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36759500c193d80c09881fe5667f0850d304f71cb2f685d0cdd67d75bb4ac924/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:34 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36759500c193d80c09881fe5667f0850d304f71cb2f685d0cdd67d75bb4ac924/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:34 np0005604943 podman[148982]: 2026-02-02 11:40:33.949969993 +0000 UTC m=+0.021412442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:40:34 np0005604943 podman[148982]: 2026-02-02 11:40:34.055728764 +0000 UTC m=+0.127171223 container init c2be9fd12c07760b322656771098f2c6960b501cdb95dfbb05491cbe59878bd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Feb  2 06:40:34 np0005604943 podman[148982]: 2026-02-02 11:40:34.065455718 +0000 UTC m=+0.136898167 container start c2be9fd12c07760b322656771098f2c6960b501cdb95dfbb05491cbe59878bd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 06:40:34 np0005604943 podman[148982]: 2026-02-02 11:40:34.069369011 +0000 UTC m=+0.140811450 container attach c2be9fd12c07760b322656771098f2c6960b501cdb95dfbb05491cbe59878bd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_varahamihira, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:40:34 np0005604943 friendly_varahamihira[148999]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:40:34 np0005604943 friendly_varahamihira[148999]: --> All data devices are unavailable
Feb  2 06:40:34 np0005604943 systemd[1]: libpod-c2be9fd12c07760b322656771098f2c6960b501cdb95dfbb05491cbe59878bd8.scope: Deactivated successfully.
Feb  2 06:40:34 np0005604943 podman[148982]: 2026-02-02 11:40:34.541703491 +0000 UTC m=+0.613146020 container died c2be9fd12c07760b322656771098f2c6960b501cdb95dfbb05491cbe59878bd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_varahamihira, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 06:40:34 np0005604943 python3.9[149143]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:34 np0005604943 systemd[1]: var-lib-containers-storage-overlay-36759500c193d80c09881fe5667f0850d304f71cb2f685d0cdd67d75bb4ac924-merged.mount: Deactivated successfully.
Feb  2 06:40:34 np0005604943 podman[148982]: 2026-02-02 11:40:34.83376083 +0000 UTC m=+0.905203259 container remove c2be9fd12c07760b322656771098f2c6960b501cdb95dfbb05491cbe59878bd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 06:40:34 np0005604943 systemd[1]: libpod-conmon-c2be9fd12c07760b322656771098f2c6960b501cdb95dfbb05491cbe59878bd8.scope: Deactivated successfully.
Feb  2 06:40:35 np0005604943 python3.9[149327]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032434.2036622-177-182788045362973/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:35 np0005604943 podman[149341]: 2026-02-02 11:40:35.197677152 +0000 UTC m=+0.034502604 container create f9ddedf76ba1ef3c27a4f572148ea156320f084e3e7ce60510502ba44f99ee9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swanson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:40:35 np0005604943 systemd[1]: Started libpod-conmon-f9ddedf76ba1ef3c27a4f572148ea156320f084e3e7ce60510502ba44f99ee9e.scope.
Feb  2 06:40:35 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:40:35 np0005604943 podman[149341]: 2026-02-02 11:40:35.261784942 +0000 UTC m=+0.098610414 container init f9ddedf76ba1ef3c27a4f572148ea156320f084e3e7ce60510502ba44f99ee9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swanson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:40:35 np0005604943 podman[149341]: 2026-02-02 11:40:35.268023905 +0000 UTC m=+0.104849397 container start f9ddedf76ba1ef3c27a4f572148ea156320f084e3e7ce60510502ba44f99ee9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swanson, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:40:35 np0005604943 pensive_swanson[149377]: 167 167
Feb  2 06:40:35 np0005604943 systemd[1]: libpod-f9ddedf76ba1ef3c27a4f572148ea156320f084e3e7ce60510502ba44f99ee9e.scope: Deactivated successfully.
Feb  2 06:40:35 np0005604943 podman[149341]: 2026-02-02 11:40:35.275036619 +0000 UTC m=+0.111862081 container attach f9ddedf76ba1ef3c27a4f572148ea156320f084e3e7ce60510502ba44f99ee9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:40:35 np0005604943 podman[149341]: 2026-02-02 11:40:35.276578209 +0000 UTC m=+0.113403671 container died f9ddedf76ba1ef3c27a4f572148ea156320f084e3e7ce60510502ba44f99ee9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swanson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 06:40:35 np0005604943 podman[149341]: 2026-02-02 11:40:35.182071113 +0000 UTC m=+0.018896615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:40:35 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4a4695e05bf30cae1d6dbcdf97843e3722704d21ed7e4982a40dde4fdb779014-merged.mount: Deactivated successfully.
Feb  2 06:40:35 np0005604943 podman[149341]: 2026-02-02 11:40:35.312594322 +0000 UTC m=+0.149419784 container remove f9ddedf76ba1ef3c27a4f572148ea156320f084e3e7ce60510502ba44f99ee9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_swanson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 06:40:35 np0005604943 systemd[1]: libpod-conmon-f9ddedf76ba1ef3c27a4f572148ea156320f084e3e7ce60510502ba44f99ee9e.scope: Deactivated successfully.
Feb  2 06:40:35 np0005604943 podman[149479]: 2026-02-02 11:40:35.452464466 +0000 UTC m=+0.047654139 container create 5110155f3f4cd424ee3a07c9ece7dfe4d1258d6453c215a196c9dd49302058e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 06:40:35 np0005604943 systemd[1]: Started libpod-conmon-5110155f3f4cd424ee3a07c9ece7dfe4d1258d6453c215a196c9dd49302058e2.scope.
Feb  2 06:40:35 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:40:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8320addadfe066d4dbfbaf68de44a396423ff4fcb10869da524b66b2ceac2c45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8320addadfe066d4dbfbaf68de44a396423ff4fcb10869da524b66b2ceac2c45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8320addadfe066d4dbfbaf68de44a396423ff4fcb10869da524b66b2ceac2c45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8320addadfe066d4dbfbaf68de44a396423ff4fcb10869da524b66b2ceac2c45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:35 np0005604943 podman[149479]: 2026-02-02 11:40:35.434252689 +0000 UTC m=+0.029442392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:40:35 np0005604943 podman[149479]: 2026-02-02 11:40:35.535633155 +0000 UTC m=+0.130822878 container init 5110155f3f4cd424ee3a07c9ece7dfe4d1258d6453c215a196c9dd49302058e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:40:35 np0005604943 podman[149479]: 2026-02-02 11:40:35.542779842 +0000 UTC m=+0.137969525 container start 5110155f3f4cd424ee3a07c9ece7dfe4d1258d6453c215a196c9dd49302058e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_noether, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:40:35 np0005604943 podman[149479]: 2026-02-02 11:40:35.545640077 +0000 UTC m=+0.140829760 container attach 5110155f3f4cd424ee3a07c9ece7dfe4d1258d6453c215a196c9dd49302058e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_noether, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Feb  2 06:40:35 np0005604943 python3.9[149548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:35 np0005604943 silly_noether[149526]: {
Feb  2 06:40:35 np0005604943 silly_noether[149526]:    "0": [
Feb  2 06:40:35 np0005604943 silly_noether[149526]:        {
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "devices": [
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "/dev/loop3"
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            ],
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_name": "ceph_lv0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_size": "21470642176",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "name": "ceph_lv0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "tags": {
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.cluster_name": "ceph",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.crush_device_class": "",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.encrypted": "0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.objectstore": "bluestore",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.osd_id": "0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.type": "block",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.vdo": "0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.with_tpm": "0"
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            },
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "type": "block",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "vg_name": "ceph_vg0"
Feb  2 06:40:35 np0005604943 silly_noether[149526]:        }
Feb  2 06:40:35 np0005604943 silly_noether[149526]:    ],
Feb  2 06:40:35 np0005604943 silly_noether[149526]:    "1": [
Feb  2 06:40:35 np0005604943 silly_noether[149526]:        {
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "devices": [
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "/dev/loop4"
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            ],
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_name": "ceph_lv1",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_size": "21470642176",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "name": "ceph_lv1",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "tags": {
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.cluster_name": "ceph",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.crush_device_class": "",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.encrypted": "0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.objectstore": "bluestore",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.osd_id": "1",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.type": "block",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.vdo": "0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.with_tpm": "0"
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            },
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "type": "block",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "vg_name": "ceph_vg1"
Feb  2 06:40:35 np0005604943 silly_noether[149526]:        }
Feb  2 06:40:35 np0005604943 silly_noether[149526]:    ],
Feb  2 06:40:35 np0005604943 silly_noether[149526]:    "2": [
Feb  2 06:40:35 np0005604943 silly_noether[149526]:        {
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "devices": [
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "/dev/loop5"
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            ],
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_name": "ceph_lv2",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_size": "21470642176",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "name": "ceph_lv2",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "tags": {
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.cluster_name": "ceph",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.crush_device_class": "",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.encrypted": "0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.objectstore": "bluestore",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.osd_id": "2",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.type": "block",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.vdo": "0",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:                "ceph.with_tpm": "0"
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            },
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "type": "block",
Feb  2 06:40:35 np0005604943 silly_noether[149526]:            "vg_name": "ceph_vg2"
Feb  2 06:40:35 np0005604943 silly_noether[149526]:        }
Feb  2 06:40:35 np0005604943 silly_noether[149526]:    ]
Feb  2 06:40:35 np0005604943 silly_noether[149526]: }
Feb  2 06:40:35 np0005604943 systemd[1]: libpod-5110155f3f4cd424ee3a07c9ece7dfe4d1258d6453c215a196c9dd49302058e2.scope: Deactivated successfully.
Feb  2 06:40:35 np0005604943 podman[149479]: 2026-02-02 11:40:35.854297501 +0000 UTC m=+0.449487214 container died 5110155f3f4cd424ee3a07c9ece7dfe4d1258d6453c215a196c9dd49302058e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:40:35 np0005604943 systemd[1]: var-lib-containers-storage-overlay-8320addadfe066d4dbfbaf68de44a396423ff4fcb10869da524b66b2ceac2c45-merged.mount: Deactivated successfully.
Feb  2 06:40:35 np0005604943 podman[149479]: 2026-02-02 11:40:35.900176032 +0000 UTC m=+0.495365705 container remove 5110155f3f4cd424ee3a07c9ece7dfe4d1258d6453c215a196c9dd49302058e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:40:35 np0005604943 systemd[1]: libpod-conmon-5110155f3f4cd424ee3a07c9ece7dfe4d1258d6453c215a196c9dd49302058e2.scope: Deactivated successfully.
Feb  2 06:40:36 np0005604943 python3.9[149694]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032435.2893546-177-84759619778794/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:36 np0005604943 podman[149775]: 2026-02-02 11:40:36.332196028 +0000 UTC m=+0.041847687 container create 110b1aaf3fd5c1c9ab07bc92f3090dd0c01b5c9610c59a0ce7de1cfc1b031079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:40:36 np0005604943 systemd[1]: Started libpod-conmon-110b1aaf3fd5c1c9ab07bc92f3090dd0c01b5c9610c59a0ce7de1cfc1b031079.scope.
Feb  2 06:40:36 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:40:36 np0005604943 podman[149775]: 2026-02-02 11:40:36.399277335 +0000 UTC m=+0.108928994 container init 110b1aaf3fd5c1c9ab07bc92f3090dd0c01b5c9610c59a0ce7de1cfc1b031079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 06:40:36 np0005604943 podman[149775]: 2026-02-02 11:40:36.404764838 +0000 UTC m=+0.114416477 container start 110b1aaf3fd5c1c9ab07bc92f3090dd0c01b5c9610c59a0ce7de1cfc1b031079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:40:36 np0005604943 condescending_shtern[149841]: 167 167
Feb  2 06:40:36 np0005604943 systemd[1]: libpod-110b1aaf3fd5c1c9ab07bc92f3090dd0c01b5c9610c59a0ce7de1cfc1b031079.scope: Deactivated successfully.
Feb  2 06:40:36 np0005604943 podman[149775]: 2026-02-02 11:40:36.314684699 +0000 UTC m=+0.024336368 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:40:36 np0005604943 podman[149775]: 2026-02-02 11:40:36.410147459 +0000 UTC m=+0.119799118 container attach 110b1aaf3fd5c1c9ab07bc92f3090dd0c01b5c9610c59a0ce7de1cfc1b031079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:40:36 np0005604943 podman[149775]: 2026-02-02 11:40:36.410586772 +0000 UTC m=+0.120238411 container died 110b1aaf3fd5c1c9ab07bc92f3090dd0c01b5c9610c59a0ce7de1cfc1b031079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:40:36 np0005604943 systemd[1]: var-lib-containers-storage-overlay-2a1f78e31bda0304077442f4003ac07ed956e01232ae6e173126440c79e3a54c-merged.mount: Deactivated successfully.
Feb  2 06:40:36 np0005604943 podman[149775]: 2026-02-02 11:40:36.446317667 +0000 UTC m=+0.155969306 container remove 110b1aaf3fd5c1c9ab07bc92f3090dd0c01b5c9610c59a0ce7de1cfc1b031079 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_shtern, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 06:40:36 np0005604943 systemd[1]: libpod-conmon-110b1aaf3fd5c1c9ab07bc92f3090dd0c01b5c9610c59a0ce7de1cfc1b031079.scope: Deactivated successfully.
Feb  2 06:40:36 np0005604943 podman[149939]: 2026-02-02 11:40:36.562380487 +0000 UTC m=+0.041425916 container create 0d89a08365ef0219b34009a4b71b17a7b0e63c61e81b663fe51846d76411caa3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 06:40:36 np0005604943 systemd[1]: Started libpod-conmon-0d89a08365ef0219b34009a4b71b17a7b0e63c61e81b663fe51846d76411caa3.scope.
Feb  2 06:40:36 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:40:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ba01e4fd2323b6da7d47c85e4c2f7c8e2135d7d93b4f2cb31e3434e6154ba7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ba01e4fd2323b6da7d47c85e4c2f7c8e2135d7d93b4f2cb31e3434e6154ba7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ba01e4fd2323b6da7d47c85e4c2f7c8e2135d7d93b4f2cb31e3434e6154ba7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ba01e4fd2323b6da7d47c85e4c2f7c8e2135d7d93b4f2cb31e3434e6154ba7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:40:36 np0005604943 podman[149939]: 2026-02-02 11:40:36.545030903 +0000 UTC m=+0.024076342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:40:36 np0005604943 podman[149939]: 2026-02-02 11:40:36.658823563 +0000 UTC m=+0.137869012 container init 0d89a08365ef0219b34009a4b71b17a7b0e63c61e81b663fe51846d76411caa3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 06:40:36 np0005604943 podman[149939]: 2026-02-02 11:40:36.667413728 +0000 UTC m=+0.146459157 container start 0d89a08365ef0219b34009a4b71b17a7b0e63c61e81b663fe51846d76411caa3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_antonelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:40:36 np0005604943 podman[149939]: 2026-02-02 11:40:36.670818497 +0000 UTC m=+0.149863926 container attach 0d89a08365ef0219b34009a4b71b17a7b0e63c61e81b663fe51846d76411caa3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_antonelli, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:40:36 np0005604943 python3.9[149934]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:40:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:40:37 np0005604943 lvm[150189]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:40:37 np0005604943 lvm[150189]: VG ceph_vg1 finished
Feb  2 06:40:37 np0005604943 lvm[150188]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:40:37 np0005604943 lvm[150188]: VG ceph_vg0 finished
Feb  2 06:40:37 np0005604943 lvm[150191]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:40:37 np0005604943 lvm[150191]: VG ceph_vg2 finished
Feb  2 06:40:37 np0005604943 python3.9[150175]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:37 np0005604943 dreamy_antonelli[149956]: {}
Feb  2 06:40:37 np0005604943 systemd[1]: libpod-0d89a08365ef0219b34009a4b71b17a7b0e63c61e81b663fe51846d76411caa3.scope: Deactivated successfully.
Feb  2 06:40:37 np0005604943 systemd[1]: libpod-0d89a08365ef0219b34009a4b71b17a7b0e63c61e81b663fe51846d76411caa3.scope: Consumed 1.032s CPU time.
Feb  2 06:40:37 np0005604943 podman[149939]: 2026-02-02 11:40:37.406074275 +0000 UTC m=+0.885119714 container died 0d89a08365ef0219b34009a4b71b17a7b0e63c61e81b663fe51846d76411caa3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:40:37 np0005604943 systemd[1]: var-lib-containers-storage-overlay-30ba01e4fd2323b6da7d47c85e4c2f7c8e2135d7d93b4f2cb31e3434e6154ba7-merged.mount: Deactivated successfully.
Feb  2 06:40:37 np0005604943 podman[149939]: 2026-02-02 11:40:37.45933029 +0000 UTC m=+0.938375719 container remove 0d89a08365ef0219b34009a4b71b17a7b0e63c61e81b663fe51846d76411caa3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_antonelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  2 06:40:37 np0005604943 systemd[1]: libpod-conmon-0d89a08365ef0219b34009a4b71b17a7b0e63c61e81b663fe51846d76411caa3.scope: Deactivated successfully.
Feb  2 06:40:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:40:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:40:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:40:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:40:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:37 np0005604943 python3.9[150384]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:38 np0005604943 python3.9[150462]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:40:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:40:38 np0005604943 python3.9[150614]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:39 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:39Z|00025|memory|INFO|16384 kB peak resident set size after 30.0 seconds
Feb  2 06:40:39 np0005604943 ovn_controller[145056]: 2026-02-02T11:40:39Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Feb  2 06:40:39 np0005604943 podman[150692]: 2026-02-02 11:40:39.278028386 +0000 UTC m=+0.089981639 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb  2 06:40:39 np0005604943 python3.9[150693]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:39 np0005604943 python3.9[150871]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:40:40 np0005604943 python3.9[151023]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:40:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:40:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:40:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:40:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:40:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:40:41 np0005604943 python3.9[151101]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:40:41 np0005604943 python3.9[151253]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:41 np0005604943 python3.9[151331]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:40:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:40:42 np0005604943 python3.9[151483]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:40:42 np0005604943 systemd[1]: Reloading.
Feb  2 06:40:42 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:40:42 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:40:43 np0005604943 python3.9[151672]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:43 np0005604943 python3.9[151750]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:40:44 np0005604943 python3.9[151902]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:44 np0005604943 python3.9[151980]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:40:45 np0005604943 python3.9[152132]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:40:45 np0005604943 systemd[1]: Reloading.
Feb  2 06:40:45 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:40:45 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:40:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:45 np0005604943 systemd[1]: Starting Create netns directory...
Feb  2 06:40:45 np0005604943 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  2 06:40:45 np0005604943 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  2 06:40:45 np0005604943 systemd[1]: Finished Create netns directory.
Feb  2 06:40:46 np0005604943 python3.9[152324]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:40:47 np0005604943 python3.9[152476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:47 np0005604943 python3.9[152599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032446.7779558-328-93784376338126/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:48 np0005604943 python3.9[152751]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:40:49 np0005604943 python3.9[152903]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:40:49 np0005604943 python3.9[153055]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:40:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:50 np0005604943 python3.9[153178]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032449.1838741-361-20730089382697/.source.json _original_basename=.v_471_8x follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:40:50 np0005604943 python3.9[153328]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:40:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:51 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 06:40:51 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2069 writes, 9152 keys, 2069 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2069 writes, 2069 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2069 writes, 9152 keys, 2069 commit groups, 1.0 writes per commit group, ingest: 12.24 MB, 0.02 MB/s#012Interval WAL: 2069 writes, 2069 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    170.8      0.05              0.01         3    0.017       0      0       0.0       0.0#012  L6      1/0    6.75 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    175.9    153.9      0.09              0.04         2    0.046    7170    734       0.0       0.0#012 Sum      1/0    6.75 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    112.9    159.9      0.14              0.05         5    0.029    7170    734       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    116.8    165.0      0.14              0.05         4    0.035    7170    734       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    175.9    153.9      0.09              0.04         2    0.046    7170    734       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    186.8      0.05              0.01         2    0.023       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.1      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.009, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cd5e4c78d0#2 capacity: 308.00 MB usage: 639.97 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(37,552.19 KB,0.17508%) FilterBlock(6,28.61 KB,0.00907105%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 06:40:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:40:52 np0005604943 python3.9[153751]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Feb  2 06:40:53 np0005604943 python3.9[153903]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  2 06:40:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:54 np0005604943 python3[154055]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb  2 06:40:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:40:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:40:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:41:03 np0005604943 podman[154069]: 2026-02-02 11:41:03.531532182 +0000 UTC m=+8.726224088 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 06:41:03 np0005604943 podman[154189]: 2026-02-02 11:41:03.663393895 +0000 UTC m=+0.054447697 container create f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 06:41:03 np0005604943 podman[154189]: 2026-02-02 11:41:03.627803094 +0000 UTC m=+0.018856906 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 06:41:03 np0005604943 python3[154055]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 06:41:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:04 np0005604943 python3.9[154379]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:41:04 np0005604943 python3.9[154533]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:05 np0005604943 python3.9[154609]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:41:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:05 np0005604943 python3.9[154760]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770032465.336776-439-165815205750845/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:06 np0005604943 python3.9[154836]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 06:41:06 np0005604943 systemd[1]: Reloading.
Feb  2 06:41:06 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:41:06 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:41:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:41:07 np0005604943 python3.9[154948]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:41:07 np0005604943 systemd[1]: Reloading.
Feb  2 06:41:07 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:41:07 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:41:07 np0005604943 systemd[1]: Starting ovn_metadata_agent container...
Feb  2 06:41:07 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:41:07 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66828488d6a738970c08a788842396543666ffb837cadd7b3c21a2f6f66275e5/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:07 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66828488d6a738970c08a788842396543666ffb837cadd7b3c21a2f6f66275e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:07 np0005604943 systemd[1]: Started /usr/bin/podman healthcheck run f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9.
Feb  2 06:41:07 np0005604943 podman[154990]: 2026-02-02 11:41:07.633147311 +0000 UTC m=+0.102003343 container init f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: + sudo -E kolla_set_configs
Feb  2 06:41:07 np0005604943 podman[154990]: 2026-02-02 11:41:07.666660559 +0000 UTC m=+0.135516561 container start f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb  2 06:41:07 np0005604943 edpm-start-podman-container[154990]: ovn_metadata_agent
Feb  2 06:41:07 np0005604943 edpm-start-podman-container[154989]: Creating additional drop-in dependency for "ovn_metadata_agent" (f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9)
Feb  2 06:41:07 np0005604943 podman[155012]: 2026-02-02 11:41:07.719063892 +0000 UTC m=+0.045685548 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb  2 06:41:07 np0005604943 systemd[1]: Reloading.
Feb  2 06:41:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:07 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:41:07 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Validating config file
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Copying service configuration files
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Writing out command to execute
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Setting permission for /var/lib/neutron
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: ++ cat /run_command
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: + CMD=neutron-ovn-metadata-agent
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: + ARGS=
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: + sudo kolla_copy_cacerts
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: + [[ ! -n '' ]]
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: + . kolla_extend_start
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: Running command: 'neutron-ovn-metadata-agent'
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: + umask 0022
Feb  2 06:41:07 np0005604943 ovn_metadata_agent[155006]: + exec neutron-ovn-metadata-agent
Feb  2 06:41:07 np0005604943 systemd[1]: Started ovn_metadata_agent container.
Feb  2 06:41:08 np0005604943 python3.9[155243]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb  2 06:41:09 np0005604943 podman[155368]: 2026-02-02 11:41:09.415151056 +0000 UTC m=+0.074636836 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Feb  2 06:41:09 np0005604943 python3.9[155411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:41:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:41:09
Feb  2 06:41:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:41:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:41:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['backups', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.mgr']
Feb  2 06:41:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:41:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.965 155011 INFO neutron.common.config [-] Logging enabled!#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.965 155011 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.965 155011 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.966 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.966 155011 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.966 155011 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.966 155011 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.966 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.966 155011 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.966 155011 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.966 155011 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.967 155011 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.967 155011 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.967 155011 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.967 155011 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.967 155011 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.967 155011 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.967 155011 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.967 155011 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.967 155011 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.968 155011 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.968 155011 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.968 155011 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.968 155011 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.968 155011 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.968 155011 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.968 155011 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.968 155011 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.968 155011 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.968 155011 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.969 155011 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.969 155011 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.969 155011 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.969 155011 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.969 155011 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.969 155011 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.969 155011 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.969 155011 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.970 155011 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.970 155011 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.970 155011 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.970 155011 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.970 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.970 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.970 155011 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.970 155011 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.970 155011 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.970 155011 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.971 155011 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.971 155011 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.971 155011 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.971 155011 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.971 155011 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.971 155011 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.971 155011 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.971 155011 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.971 155011 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.972 155011 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.972 155011 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.972 155011 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.972 155011 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.972 155011 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.972 155011 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.972 155011 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.972 155011 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.972 155011 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.972 155011 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.973 155011 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.973 155011 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.973 155011 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.973 155011 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.973 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.973 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.973 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.973 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.974 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.974 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.974 155011 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.975 155011 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.975 155011 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.975 155011 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.975 155011 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.975 155011 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.975 155011 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.975 155011 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.975 155011 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.975 155011 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.975 155011 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.976 155011 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.976 155011 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.976 155011 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.976 155011 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.976 155011 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.976 155011 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.976 155011 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.976 155011 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.976 155011 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.977 155011 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.977 155011 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.977 155011 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.977 155011 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.977 155011 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.977 155011 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.977 155011 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.977 155011 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.977 155011 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.977 155011 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.978 155011 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.978 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.978 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.978 155011 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.978 155011 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.978 155011 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.978 155011 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.978 155011 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.978 155011 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.979 155011 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.979 155011 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.979 155011 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.979 155011 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.979 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.979 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.979 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.979 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.979 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.980 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.980 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.980 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.980 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.980 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.980 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.980 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.980 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.980 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.981 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.981 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.981 155011 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.981 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.981 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.981 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.981 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.981 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.981 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.982 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.982 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.982 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.982 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.982 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.982 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.982 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.982 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.982 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.983 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.983 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.983 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.983 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.983 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.983 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.983 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.984 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.984 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.984 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.984 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.984 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.984 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.984 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.984 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.984 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.985 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.985 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.985 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.985 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.985 155011 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.985 155011 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.985 155011 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.985 155011 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.985 155011 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.986 155011 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.986 155011 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.986 155011 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.986 155011 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.986 155011 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.986 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.986 155011 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.986 155011 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.987 155011 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.987 155011 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.987 155011 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.987 155011 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.987 155011 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.987 155011 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.987 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.987 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.988 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.988 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.988 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.988 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.988 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.988 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.988 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.988 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.989 155011 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.989 155011 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.989 155011 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.989 155011 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.989 155011 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.989 155011 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.989 155011 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.989 155011 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.989 155011 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.990 155011 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.990 155011 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.990 155011 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.990 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.990 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.990 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.990 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.990 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.990 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.991 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.991 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.991 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.991 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.991 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.991 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.991 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.991 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.991 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.992 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.992 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.992 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.992 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.992 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.992 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.992 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.992 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.992 155011 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 python3.9[155545]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032469.137387-484-135554517904861/.source.yaml _original_basename=.6_t733x9 follow=False checksum=25fa6f71a6a309569f82bd50b637fd8c84e96a66 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.993 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.993 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.993 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.993 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.993 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.993 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.993 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.993 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.994 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.994 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.994 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.994 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.994 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.994 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.994 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.994 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.995 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.995 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.995 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.995 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.995 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.995 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.995 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.996 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.996 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.996 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.996 155011 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.996 155011 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.996 155011 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.996 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.996 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.997 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.997 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.997 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.997 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.997 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.997 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.998 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.998 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.998 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.998 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.998 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.998 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.998 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.998 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.999 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.999 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.999 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.999 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.999 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.999 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.999 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.999 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:09.999 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.000 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.000 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.000 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.000 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.000 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.000 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.000 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.000 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.000 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.001 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.001 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.001 155011 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.001 155011 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.010 155011 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.010 155011 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.010 155011 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.010 155011 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.011 155011 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.023 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 63c28000-4b99-40fb-b19f-6b3ba1922f6d (UUID: 63c28000-4b99-40fb-b19f-6b3ba1922f6d) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.054 155011 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.054 155011 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.054 155011 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.054 155011 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.058 155011 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.066 155011 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.072 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '63c28000-4b99-40fb-b19f-6b3ba1922f6d'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], external_ids={}, name=63c28000-4b99-40fb-b19f-6b3ba1922f6d, nb_cfg_timestamp=1770032417283, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.073 155011 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fe337c12b20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.074 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.074 155011 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.075 155011 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.075 155011 INFO oslo_service.service [-] Starting 1 workers#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.078 155011 DEBUG oslo_service.service [-] Started child 155570 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.081 155011 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpsq_kuak1/privsep.sock']#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.081 155570 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-169114'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.103 155570 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.103 155570 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.103 155570 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.108 155570 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.114 155570 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.120 155570 INFO eventlet.wsgi.server [-] (155570) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Feb  2 06:41:10 np0005604943 systemd[1]: session-47.scope: Deactivated successfully.
Feb  2 06:41:10 np0005604943 systemd[1]: session-47.scope: Consumed 48.865s CPU time.
Feb  2 06:41:10 np0005604943 systemd-logind[786]: Session 47 logged out. Waiting for processes to exit.
Feb  2 06:41:10 np0005604943 systemd-logind[786]: Removed session 47.
Feb  2 06:41:10 np0005604943 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.698 155011 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.699 155011 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpsq_kuak1/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.572 155575 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.577 155575 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.579 155575 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.579 155575 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155575#033[00m
Feb  2 06:41:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:10.701 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[d5db0fa4-ebb3-4889-8dca-3898e89b748d]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:41:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.187 155575 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.187 155575 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.187 155575 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.730 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[903fed11-1570-4015-ab4f-6d6bef979b70]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.733 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, column=external_ids, values=({'neutron:ovn-metadata-id': '951a5031-a6a1-5f5a-9eaa-912531a485ba'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.751 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.759 155011 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.759 155011 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.760 155011 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.760 155011 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.760 155011 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.760 155011 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.761 155011 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.761 155011 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.761 155011 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.762 155011 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.762 155011 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.762 155011 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.762 155011 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.763 155011 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.763 155011 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.763 155011 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.764 155011 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.764 155011 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.764 155011 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.764 155011 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.765 155011 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.765 155011 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.765 155011 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.765 155011 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.766 155011 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.766 155011 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.766 155011 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.767 155011 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.767 155011 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.767 155011 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.767 155011 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.768 155011 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.768 155011 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.768 155011 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.768 155011 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.769 155011 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.769 155011 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.769 155011 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.770 155011 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.770 155011 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.770 155011 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.770 155011 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.771 155011 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.771 155011 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.771 155011 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.771 155011 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.772 155011 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.772 155011 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.772 155011 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.772 155011 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.773 155011 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.773 155011 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.773 155011 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.773 155011 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.774 155011 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.774 155011 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.774 155011 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.774 155011 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.775 155011 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.775 155011 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.775 155011 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.775 155011 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.776 155011 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.776 155011 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.776 155011 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.776 155011 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.777 155011 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.777 155011 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.777 155011 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.777 155011 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.778 155011 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.778 155011 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.778 155011 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.778 155011 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.779 155011 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.779 155011 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.779 155011 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.779 155011 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.780 155011 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.780 155011 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.780 155011 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.780 155011 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.781 155011 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.781 155011 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.781 155011 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.781 155011 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.782 155011 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.782 155011 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.782 155011 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.782 155011 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.783 155011 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.783 155011 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.783 155011 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.783 155011 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.784 155011 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.784 155011 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.784 155011 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.784 155011 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.785 155011 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.785 155011 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.785 155011 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.785 155011 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.785 155011 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.786 155011 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.786 155011 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.786 155011 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.786 155011 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.787 155011 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:41:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.787 155011 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.787 155011 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.787 155011 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.788 155011 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.788 155011 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.788 155011 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.788 155011 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.788 155011 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.788 155011 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.788 155011 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.789 155011 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.789 155011 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.789 155011 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.789 155011 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.789 155011 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.789 155011 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.790 155011 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.790 155011 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.790 155011 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.790 155011 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.790 155011 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.790 155011 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.790 155011 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.791 155011 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.791 155011 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.791 155011 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.791 155011 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.791 155011 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.791 155011 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.791 155011 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.792 155011 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.792 155011 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.792 155011 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.792 155011 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.792 155011 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.792 155011 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.792 155011 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.793 155011 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.793 155011 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.793 155011 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.793 155011 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.793 155011 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.793 155011 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.793 155011 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.794 155011 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.794 155011 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.794 155011 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.794 155011 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.794 155011 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.794 155011 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.794 155011 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.794 155011 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.795 155011 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.795 155011 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.795 155011 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.795 155011 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.795 155011 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.795 155011 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.795 155011 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.796 155011 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.796 155011 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.796 155011 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.796 155011 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.796 155011 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.796 155011 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.796 155011 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.797 155011 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.797 155011 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.797 155011 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.797 155011 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.797 155011 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.797 155011 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.797 155011 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.797 155011 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.798 155011 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.798 155011 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.798 155011 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.798 155011 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.798 155011 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.798 155011 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.798 155011 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.799 155011 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.799 155011 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.799 155011 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.799 155011 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.799 155011 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.799 155011 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.799 155011 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.800 155011 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.800 155011 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.800 155011 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.800 155011 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.800 155011 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.800 155011 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.800 155011 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.801 155011 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.801 155011 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.801 155011 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.801 155011 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.801 155011 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.801 155011 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.801 155011 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.802 155011 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.802 155011 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.802 155011 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.802 155011 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.802 155011 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.802 155011 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.802 155011 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.803 155011 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.803 155011 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.803 155011 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.803 155011 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.803 155011 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.803 155011 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.803 155011 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.804 155011 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.804 155011 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.804 155011 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.804 155011 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.804 155011 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.804 155011 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.804 155011 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.804 155011 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.805 155011 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.805 155011 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.805 155011 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.805 155011 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.805 155011 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.805 155011 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.805 155011 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.805 155011 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.806 155011 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.806 155011 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.806 155011 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.806 155011 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.806 155011 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.806 155011 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.806 155011 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.806 155011 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.807 155011 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.807 155011 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.807 155011 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.807 155011 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.807 155011 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.807 155011 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.808 155011 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.808 155011 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.808 155011 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.808 155011 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.808 155011 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.808 155011 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.808 155011 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.808 155011 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.809 155011 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.809 155011 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.809 155011 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.809 155011 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.809 155011 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.809 155011 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.809 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.810 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.810 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.810 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.810 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.810 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.810 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.811 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.811 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.811 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.811 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.811 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.811 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.811 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.812 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.812 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.812 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.812 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.812 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.812 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.812 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.813 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.813 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.813 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.813 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.813 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.813 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.813 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.813 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.814 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.814 155011 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.814 155011 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.814 155011 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.814 155011 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.814 155011 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:41:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:41:11.814 155011 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.901879) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032471901933, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 783, "num_deletes": 251, "total_data_size": 1050224, "memory_usage": 1067952, "flush_reason": "Manual Compaction"}
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032471910581, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1040917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8925, "largest_seqno": 9707, "table_properties": {"data_size": 1036950, "index_size": 1747, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8473, "raw_average_key_size": 18, "raw_value_size": 1028993, "raw_average_value_size": 2256, "num_data_blocks": 81, "num_entries": 456, "num_filter_entries": 456, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770032403, "oldest_key_time": 1770032403, "file_creation_time": 1770032471, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 8750 microseconds, and 2495 cpu microseconds.
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.910635) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1040917 bytes OK
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.910655) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.913765) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.913792) EVENT_LOG_v1 {"time_micros": 1770032471913786, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.913814) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1046292, prev total WAL file size 1046292, number of live WAL files 2.
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.914492) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1016KB)], [23(6916KB)]
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032471914529, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8123143, "oldest_snapshot_seqno": -1}
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3298 keys, 6171747 bytes, temperature: kUnknown
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032471956595, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6171747, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6147825, "index_size": 14597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79940, "raw_average_key_size": 24, "raw_value_size": 6086261, "raw_average_value_size": 1845, "num_data_blocks": 636, "num_entries": 3298, "num_filter_entries": 3298, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770032471, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.957048) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6171747 bytes
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.959671) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.8 rd, 145.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.8 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(13.7) write-amplify(5.9) OK, records in: 3812, records dropped: 514 output_compression: NoCompression
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.959715) EVENT_LOG_v1 {"time_micros": 1770032471959698, "job": 8, "event": "compaction_finished", "compaction_time_micros": 42358, "compaction_time_cpu_micros": 13383, "output_level": 6, "num_output_files": 1, "total_output_size": 6171747, "num_input_records": 3812, "num_output_records": 3298, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032471959994, "job": 8, "event": "table_file_deletion", "file_number": 25}
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032471960648, "job": 8, "event": "table_file_deletion", "file_number": 23}
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.914396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.960790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.960797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.960799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.960801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:41:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:41:11.960803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:41:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:41:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:15 np0005604943 systemd-logind[786]: New session 48 of user zuul.
Feb  2 06:41:15 np0005604943 systemd[1]: Started Session 48 of User zuul.
Feb  2 06:41:17 np0005604943 python3.9[155734]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:41:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:41:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:18 np0005604943 python3.9[155892]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:41:19 np0005604943 python3.9[156057]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 06:41:19 np0005604943 systemd[1]: Reloading.
Feb  2 06:41:19 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:41:19 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:41:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:21 np0005604943 python3.9[156244]: ansible-ansible.builtin.service_facts Invoked
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:41:21 np0005604943 network[156262]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 06:41:21 np0005604943 network[156263]: 'network-scripts' will be removed from distribution in near future.
Feb  2 06:41:21 np0005604943 network[156264]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 06:41:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:41:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:24 np0005604943 python3.9[156526]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:41:25 np0005604943 python3.9[156679]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:41:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:25 np0005604943 python3.9[156832]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:41:26 np0005604943 python3.9[156985]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:41:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:41:27 np0005604943 python3.9[157138]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:41:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:28 np0005604943 python3.9[157291]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:41:28 np0005604943 python3.9[157444]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:41:29 np0005604943 python3.9[157597]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:30 np0005604943 python3.9[157749]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:30 np0005604943 python3.9[157901]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:31 np0005604943 python3.9[158053]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:31 np0005604943 python3.9[158205]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:41:32 np0005604943 python3.9[158357]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:32 np0005604943 python3.9[158509]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:33 np0005604943 python3.9[158661]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:33 np0005604943 python3.9[158813]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:34 np0005604943 python3.9[158965]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:34 np0005604943 python3.9[159117]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:35 np0005604943 python3.9[159269]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:35 np0005604943 python3.9[159421]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:36 np0005604943 python3.9[159573]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:41:37 np0005604943 python3.9[159725]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:41:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:41:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:37 np0005604943 python3.9[159925]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 06:41:37 np0005604943 podman[159960]: 2026-02-02 11:41:37.952975309 +0000 UTC m=+0.051125729 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:41:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:41:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:41:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:41:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:41:38 np0005604943 python3.9[160169]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 06:41:38 np0005604943 systemd[1]: Reloading.
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:41:38 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:41:38 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:41:38 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:41:39 np0005604943 podman[160403]: 2026-02-02 11:41:39.171542862 +0000 UTC m=+0.062910257 container create 25a528aab624f9e18b05bdd253d925373392f20e44a30efa9990b7159d776c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:41:39 np0005604943 systemd[1]: Started libpod-conmon-25a528aab624f9e18b05bdd253d925373392f20e44a30efa9990b7159d776c10.scope.
Feb  2 06:41:39 np0005604943 podman[160403]: 2026-02-02 11:41:39.133223932 +0000 UTC m=+0.024591347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:41:39 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:41:39 np0005604943 podman[160403]: 2026-02-02 11:41:39.304767816 +0000 UTC m=+0.196135251 container init 25a528aab624f9e18b05bdd253d925373392f20e44a30efa9990b7159d776c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 06:41:39 np0005604943 podman[160403]: 2026-02-02 11:41:39.310898383 +0000 UTC m=+0.202265778 container start 25a528aab624f9e18b05bdd253d925373392f20e44a30efa9990b7159d776c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:41:39 np0005604943 cool_fermi[160465]: 167 167
Feb  2 06:41:39 np0005604943 podman[160403]: 2026-02-02 11:41:39.317683388 +0000 UTC m=+0.209050783 container attach 25a528aab624f9e18b05bdd253d925373392f20e44a30efa9990b7159d776c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 06:41:39 np0005604943 systemd[1]: libpod-25a528aab624f9e18b05bdd253d925373392f20e44a30efa9990b7159d776c10.scope: Deactivated successfully.
Feb  2 06:41:39 np0005604943 podman[160403]: 2026-02-02 11:41:39.319405327 +0000 UTC m=+0.210772762 container died 25a528aab624f9e18b05bdd253d925373392f20e44a30efa9990b7159d776c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 06:41:39 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f5e652d79a6261d08e15b8a7683d591edfbb60d56078f51e1d83f893bb89c47e-merged.mount: Deactivated successfully.
Feb  2 06:41:39 np0005604943 python3.9[160462]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:41:39 np0005604943 podman[160403]: 2026-02-02 11:41:39.506500288 +0000 UTC m=+0.397867683 container remove 25a528aab624f9e18b05bdd253d925373392f20e44a30efa9990b7159d776c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermi, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:41:39 np0005604943 systemd[1]: libpod-conmon-25a528aab624f9e18b05bdd253d925373392f20e44a30efa9990b7159d776c10.scope: Deactivated successfully.
Feb  2 06:41:39 np0005604943 podman[160584]: 2026-02-02 11:41:39.693724683 +0000 UTC m=+0.085692301 container create 78b380bdb0c870a9498dd7008f469e51af54b49eb86413b14980a2184dd3bcaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mccarthy, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:41:39 np0005604943 podman[160584]: 2026-02-02 11:41:39.628791959 +0000 UTC m=+0.020759597 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:41:39 np0005604943 podman[160533]: 2026-02-02 11:41:39.731113487 +0000 UTC m=+0.176403856 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb  2 06:41:39 np0005604943 systemd[1]: Started libpod-conmon-78b380bdb0c870a9498dd7008f469e51af54b49eb86413b14980a2184dd3bcaa.scope.
Feb  2 06:41:39 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:41:39 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81f797af61bad9f47e508145f6230bc8da5da3c9a0bdb141dc71e98b7210b10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:39 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81f797af61bad9f47e508145f6230bc8da5da3c9a0bdb141dc71e98b7210b10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:39 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81f797af61bad9f47e508145f6230bc8da5da3c9a0bdb141dc71e98b7210b10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:39 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81f797af61bad9f47e508145f6230bc8da5da3c9a0bdb141dc71e98b7210b10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:39 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81f797af61bad9f47e508145f6230bc8da5da3c9a0bdb141dc71e98b7210b10/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:39 np0005604943 podman[160584]: 2026-02-02 11:41:39.790087539 +0000 UTC m=+0.182055187 container init 78b380bdb0c870a9498dd7008f469e51af54b49eb86413b14980a2184dd3bcaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:41:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:39 np0005604943 podman[160584]: 2026-02-02 11:41:39.798852111 +0000 UTC m=+0.190819729 container start 78b380bdb0c870a9498dd7008f469e51af54b49eb86413b14980a2184dd3bcaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 06:41:39 np0005604943 podman[160584]: 2026-02-02 11:41:39.802928058 +0000 UTC m=+0.194895676 container attach 78b380bdb0c870a9498dd7008f469e51af54b49eb86413b14980a2184dd3bcaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:41:39 np0005604943 python3.9[160690]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:41:40 np0005604943 trusting_mccarthy[160658]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:41:40 np0005604943 trusting_mccarthy[160658]: --> All data devices are unavailable
Feb  2 06:41:40 np0005604943 podman[160584]: 2026-02-02 11:41:40.200900663 +0000 UTC m=+0.592868311 container died 78b380bdb0c870a9498dd7008f469e51af54b49eb86413b14980a2184dd3bcaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mccarthy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 06:41:40 np0005604943 systemd[1]: libpod-78b380bdb0c870a9498dd7008f469e51af54b49eb86413b14980a2184dd3bcaa.scope: Deactivated successfully.
Feb  2 06:41:40 np0005604943 systemd[1]: var-lib-containers-storage-overlay-d81f797af61bad9f47e508145f6230bc8da5da3c9a0bdb141dc71e98b7210b10-merged.mount: Deactivated successfully.
Feb  2 06:41:40 np0005604943 podman[160584]: 2026-02-02 11:41:40.23910669 +0000 UTC m=+0.631074308 container remove 78b380bdb0c870a9498dd7008f469e51af54b49eb86413b14980a2184dd3bcaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mccarthy, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 06:41:40 np0005604943 systemd[1]: libpod-conmon-78b380bdb0c870a9498dd7008f469e51af54b49eb86413b14980a2184dd3bcaa.scope: Deactivated successfully.
Feb  2 06:41:40 np0005604943 python3.9[160894]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:41:40 np0005604943 podman[160941]: 2026-02-02 11:41:40.578618147 +0000 UTC m=+0.034605635 container create 0461d6a3595c31ef5fd4976f6fa3da27c19d12e92ae23a95174410390c02469b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:41:40 np0005604943 systemd[1]: Started libpod-conmon-0461d6a3595c31ef5fd4976f6fa3da27c19d12e92ae23a95174410390c02469b.scope.
Feb  2 06:41:40 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:41:40 np0005604943 podman[160941]: 2026-02-02 11:41:40.641721618 +0000 UTC m=+0.097709126 container init 0461d6a3595c31ef5fd4976f6fa3da27c19d12e92ae23a95174410390c02469b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:41:40 np0005604943 podman[160941]: 2026-02-02 11:41:40.648255396 +0000 UTC m=+0.104242884 container start 0461d6a3595c31ef5fd4976f6fa3da27c19d12e92ae23a95174410390c02469b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 06:41:40 np0005604943 ecstatic_mirzakhani[160996]: 167 167
Feb  2 06:41:40 np0005604943 systemd[1]: libpod-0461d6a3595c31ef5fd4976f6fa3da27c19d12e92ae23a95174410390c02469b.scope: Deactivated successfully.
Feb  2 06:41:40 np0005604943 podman[160941]: 2026-02-02 11:41:40.652506658 +0000 UTC m=+0.108494146 container attach 0461d6a3595c31ef5fd4976f6fa3da27c19d12e92ae23a95174410390c02469b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 06:41:40 np0005604943 podman[160941]: 2026-02-02 11:41:40.653086364 +0000 UTC m=+0.109073852 container died 0461d6a3595c31ef5fd4976f6fa3da27c19d12e92ae23a95174410390c02469b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:41:40 np0005604943 podman[160941]: 2026-02-02 11:41:40.562775492 +0000 UTC m=+0.018763000 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:41:40 np0005604943 systemd[1]: var-lib-containers-storage-overlay-9f33259adac83ecadf324dba9d094471189f8e334bf1d9f3ea48e97c6890e675-merged.mount: Deactivated successfully.
Feb  2 06:41:40 np0005604943 podman[160941]: 2026-02-02 11:41:40.684760744 +0000 UTC m=+0.140748232 container remove 0461d6a3595c31ef5fd4976f6fa3da27c19d12e92ae23a95174410390c02469b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:41:40 np0005604943 systemd[1]: libpod-conmon-0461d6a3595c31ef5fd4976f6fa3da27c19d12e92ae23a95174410390c02469b.scope: Deactivated successfully.
Feb  2 06:41:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:41:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:41:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:41:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:41:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:41:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:41:40 np0005604943 podman[161080]: 2026-02-02 11:41:40.846019913 +0000 UTC m=+0.079122012 container create 0f498d4cd0441364f7f9510599bc98d93312102ac0b8ce76f9e0ea2e842bebaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yonath, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:41:40 np0005604943 podman[161080]: 2026-02-02 11:41:40.790640474 +0000 UTC m=+0.023742613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:41:40 np0005604943 systemd[1]: Started libpod-conmon-0f498d4cd0441364f7f9510599bc98d93312102ac0b8ce76f9e0ea2e842bebaa.scope.
Feb  2 06:41:40 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:41:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56850fc0c3dcd6a5f24d6bbd6f6f08cd76f073b03ab08da012086c71de4307b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56850fc0c3dcd6a5f24d6bbd6f6f08cd76f073b03ab08da012086c71de4307b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56850fc0c3dcd6a5f24d6bbd6f6f08cd76f073b03ab08da012086c71de4307b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56850fc0c3dcd6a5f24d6bbd6f6f08cd76f073b03ab08da012086c71de4307b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:40 np0005604943 podman[161080]: 2026-02-02 11:41:40.968566302 +0000 UTC m=+0.201668421 container init 0f498d4cd0441364f7f9510599bc98d93312102ac0b8ce76f9e0ea2e842bebaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:41:40 np0005604943 podman[161080]: 2026-02-02 11:41:40.974704358 +0000 UTC m=+0.207806457 container start 0f498d4cd0441364f7f9510599bc98d93312102ac0b8ce76f9e0ea2e842bebaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Feb  2 06:41:41 np0005604943 podman[161080]: 2026-02-02 11:41:41.033784924 +0000 UTC m=+0.266887043 container attach 0f498d4cd0441364f7f9510599bc98d93312102ac0b8ce76f9e0ea2e842bebaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yonath, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:41:41 np0005604943 python3.9[161138]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:41:41 np0005604943 magical_yonath[161141]: {
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:    "0": [
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:        {
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "devices": [
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "/dev/loop3"
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            ],
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_name": "ceph_lv0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_size": "21470642176",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "name": "ceph_lv0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "tags": {
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.cluster_name": "ceph",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.crush_device_class": "",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.encrypted": "0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.objectstore": "bluestore",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.osd_id": "0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.type": "block",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.vdo": "0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.with_tpm": "0"
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            },
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "type": "block",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "vg_name": "ceph_vg0"
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:        }
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:    ],
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:    "1": [
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:        {
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "devices": [
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "/dev/loop4"
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            ],
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_name": "ceph_lv1",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_size": "21470642176",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "name": "ceph_lv1",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "tags": {
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.cluster_name": "ceph",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.crush_device_class": "",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.encrypted": "0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.objectstore": "bluestore",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.osd_id": "1",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.type": "block",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.vdo": "0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.with_tpm": "0"
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            },
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "type": "block",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "vg_name": "ceph_vg1"
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:        }
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:    ],
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:    "2": [
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:        {
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "devices": [
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "/dev/loop5"
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            ],
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_name": "ceph_lv2",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_size": "21470642176",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "name": "ceph_lv2",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "tags": {
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.cluster_name": "ceph",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.crush_device_class": "",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.encrypted": "0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.objectstore": "bluestore",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.osd_id": "2",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.type": "block",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.vdo": "0",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:                "ceph.with_tpm": "0"
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            },
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "type": "block",
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:            "vg_name": "ceph_vg2"
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:        }
Feb  2 06:41:41 np0005604943 magical_yonath[161141]:    ]
Feb  2 06:41:41 np0005604943 magical_yonath[161141]: }
Feb  2 06:41:41 np0005604943 systemd[1]: libpod-0f498d4cd0441364f7f9510599bc98d93312102ac0b8ce76f9e0ea2e842bebaa.scope: Deactivated successfully.
Feb  2 06:41:41 np0005604943 podman[161080]: 2026-02-02 11:41:41.28932475 +0000 UTC m=+0.522426869 container died 0f498d4cd0441364f7f9510599bc98d93312102ac0b8ce76f9e0ea2e842bebaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yonath, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 06:41:41 np0005604943 systemd[1]: var-lib-containers-storage-overlay-56850fc0c3dcd6a5f24d6bbd6f6f08cd76f073b03ab08da012086c71de4307b5-merged.mount: Deactivated successfully.
Feb  2 06:41:41 np0005604943 podman[161080]: 2026-02-02 11:41:41.449964182 +0000 UTC m=+0.683066281 container remove 0f498d4cd0441364f7f9510599bc98d93312102ac0b8ce76f9e0ea2e842bebaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yonath, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 06:41:41 np0005604943 systemd[1]: libpod-conmon-0f498d4cd0441364f7f9510599bc98d93312102ac0b8ce76f9e0ea2e842bebaa.scope: Deactivated successfully.
Feb  2 06:41:41 np0005604943 python3.9[161315]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:41:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:41 np0005604943 podman[161457]: 2026-02-02 11:41:41.855307548 +0000 UTC m=+0.073651185 container create 091048fa4ea30fae57d2e0676d71c93cac208d3f3bf3884c371d02b2dac75098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 06:41:41 np0005604943 systemd[1]: Started libpod-conmon-091048fa4ea30fae57d2e0676d71c93cac208d3f3bf3884c371d02b2dac75098.scope.
Feb  2 06:41:41 np0005604943 podman[161457]: 2026-02-02 11:41:41.805323233 +0000 UTC m=+0.023666890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:41:41 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:41:41 np0005604943 podman[161457]: 2026-02-02 11:41:41.948091332 +0000 UTC m=+0.166434969 container init 091048fa4ea30fae57d2e0676d71c93cac208d3f3bf3884c371d02b2dac75098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 06:41:41 np0005604943 podman[161457]: 2026-02-02 11:41:41.954217718 +0000 UTC m=+0.172561355 container start 091048fa4ea30fae57d2e0676d71c93cac208d3f3bf3884c371d02b2dac75098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leakey, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 06:41:41 np0005604943 nifty_leakey[161519]: 167 167
Feb  2 06:41:41 np0005604943 systemd[1]: libpod-091048fa4ea30fae57d2e0676d71c93cac208d3f3bf3884c371d02b2dac75098.scope: Deactivated successfully.
Feb  2 06:41:41 np0005604943 podman[161457]: 2026-02-02 11:41:41.994509424 +0000 UTC m=+0.212853071 container attach 091048fa4ea30fae57d2e0676d71c93cac208d3f3bf3884c371d02b2dac75098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leakey, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 06:41:41 np0005604943 podman[161457]: 2026-02-02 11:41:41.996026309 +0000 UTC m=+0.214369946 container died 091048fa4ea30fae57d2e0676d71c93cac208d3f3bf3884c371d02b2dac75098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:41:42 np0005604943 python3.9[161550]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:41:42 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ed8a3084a546907e9f962eb19159228e56dadb46cf9556acda1e7f1ec7191caf-merged.mount: Deactivated successfully.
Feb  2 06:41:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:41:42 np0005604943 podman[161457]: 2026-02-02 11:41:42.260305095 +0000 UTC m=+0.478648732 container remove 091048fa4ea30fae57d2e0676d71c93cac208d3f3bf3884c371d02b2dac75098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 06:41:42 np0005604943 systemd[1]: libpod-conmon-091048fa4ea30fae57d2e0676d71c93cac208d3f3bf3884c371d02b2dac75098.scope: Deactivated successfully.
Feb  2 06:41:42 np0005604943 podman[161672]: 2026-02-02 11:41:42.353662935 +0000 UTC m=+0.017788572 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:41:42 np0005604943 podman[161672]: 2026-02-02 11:41:42.487445045 +0000 UTC m=+0.151570712 container create 9cf38e6a7790c530928fed8b5fcaf28a66a1cc2700be3d26f293d81a39a3f2c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaum, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:41:42 np0005604943 systemd[1]: Started libpod-conmon-9cf38e6a7790c530928fed8b5fcaf28a66a1cc2700be3d26f293d81a39a3f2c1.scope.
Feb  2 06:41:42 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:41:42 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb01b058ed65ea17f1ca052a69208c9e0966d39d44d39ea558a62de60bd18da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:42 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb01b058ed65ea17f1ca052a69208c9e0966d39d44d39ea558a62de60bd18da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:42 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb01b058ed65ea17f1ca052a69208c9e0966d39d44d39ea558a62de60bd18da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:42 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb01b058ed65ea17f1ca052a69208c9e0966d39d44d39ea558a62de60bd18da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:41:42 np0005604943 python3.9[161738]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:41:42 np0005604943 podman[161672]: 2026-02-02 11:41:42.62272657 +0000 UTC m=+0.286852207 container init 9cf38e6a7790c530928fed8b5fcaf28a66a1cc2700be3d26f293d81a39a3f2c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaum, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:41:42 np0005604943 podman[161672]: 2026-02-02 11:41:42.632088869 +0000 UTC m=+0.296214506 container start 9cf38e6a7790c530928fed8b5fcaf28a66a1cc2700be3d26f293d81a39a3f2c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 06:41:42 np0005604943 podman[161672]: 2026-02-02 11:41:42.654244755 +0000 UTC m=+0.318370382 container attach 9cf38e6a7790c530928fed8b5fcaf28a66a1cc2700be3d26f293d81a39a3f2c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaum, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:41:43 np0005604943 lvm[161917]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:41:43 np0005604943 lvm[161917]: VG ceph_vg0 finished
Feb  2 06:41:43 np0005604943 lvm[161920]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:41:43 np0005604943 lvm[161920]: VG ceph_vg1 finished
Feb  2 06:41:43 np0005604943 lvm[161946]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:41:43 np0005604943 lvm[161946]: VG ceph_vg2 finished
Feb  2 06:41:43 np0005604943 tender_chaum[161741]: {}
Feb  2 06:41:43 np0005604943 systemd[1]: libpod-9cf38e6a7790c530928fed8b5fcaf28a66a1cc2700be3d26f293d81a39a3f2c1.scope: Deactivated successfully.
Feb  2 06:41:43 np0005604943 podman[161672]: 2026-02-02 11:41:43.406717197 +0000 UTC m=+1.070842824 container died 9cf38e6a7790c530928fed8b5fcaf28a66a1cc2700be3d26f293d81a39a3f2c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaum, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 06:41:43 np0005604943 systemd[1]: var-lib-containers-storage-overlay-beb01b058ed65ea17f1ca052a69208c9e0966d39d44d39ea558a62de60bd18da-merged.mount: Deactivated successfully.
Feb  2 06:41:43 np0005604943 python3.9[161976]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb  2 06:41:43 np0005604943 podman[161672]: 2026-02-02 11:41:43.547091096 +0000 UTC m=+1.211216723 container remove 9cf38e6a7790c530928fed8b5fcaf28a66a1cc2700be3d26f293d81a39a3f2c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaum, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:41:43 np0005604943 systemd[1]: libpod-conmon-9cf38e6a7790c530928fed8b5fcaf28a66a1cc2700be3d26f293d81a39a3f2c1.scope: Deactivated successfully.
Feb  2 06:41:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:41:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:41:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:41:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:41:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:44 np0005604943 python3.9[162165]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 06:41:44 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:41:44 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:41:45 np0005604943 python3.9[162323]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  2 06:41:45 np0005604943 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 06:41:45 np0005604943 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 06:41:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:46 np0005604943 python3.9[162484]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:41:46 np0005604943 python3.9[162568]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:41:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:41:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:41:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 06:41:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5590 writes, 24K keys, 5590 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5590 writes, 865 syncs, 6.46 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5590 writes, 24K keys, 5590 commit groups, 1.0 writes per commit group, ingest: 18.67 MB, 0.03 MB/s#012Interval WAL: 5590 writes, 865 syncs, 6.46 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d69e545a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d69e545a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb  2 06:41:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:41:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 06:41:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 6937 writes, 28K keys, 6937 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6937 writes, 1285 syncs, 5.40 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6937 writes, 28K keys, 6937 commit groups, 1.0 writes per commit group, ingest: 19.72 MB, 0.03 MB/s
Interval WAL: 6937 writes, 1285 syncs, 5.40 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558f0e3f1a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558f0e3f1a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Feb  2 06:41:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:41:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 06:42:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5452 writes, 23K keys, 5452 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5452 writes, 788 syncs, 6.92 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5452 writes, 23K keys, 5452 commit groups, 1.0 writes per commit group, ingest: 18.44 MB, 0.03 MB/s
Interval WAL: 5452 writes, 788 syncs, 6.92 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560f677dd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560f677dd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb  2 06:42:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:42:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:04 np0005604943 ceph-mgr[75558]: [devicehealth INFO root] Check health
Feb  2 06:42:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:42:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:09 np0005604943 podman[162582]: 2026-02-02 11:42:09.027973645 +0000 UTC m=+0.049337957 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:42:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:42:09
Feb  2 06:42:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:42:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:42:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['vms', 'backups', '.mgr', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data']
Feb  2 06:42:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:42:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:42:10.002 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:42:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:42:10.003 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:42:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:42:10.003 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:42:10 np0005604943 podman[162604]: 2026-02-02 11:42:10.047428542 +0000 UTC m=+0.072770681 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:42:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:42:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:42:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:42:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:42:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:42:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:42:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:27 np0005604943 kernel: SELinux:  Converting 2778 SID table entries...
Feb  2 06:42:27 np0005604943 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 06:42:27 np0005604943 kernel: SELinux:  policy capability open_perms=1
Feb  2 06:42:27 np0005604943 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 06:42:27 np0005604943 kernel: SELinux:  policy capability always_check_network=0
Feb  2 06:42:27 np0005604943 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 06:42:27 np0005604943 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 06:42:27 np0005604943 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 06:42:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:42:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:42:37 np0005604943 kernel: SELinux:  Converting 2778 SID table entries...
Feb  2 06:42:37 np0005604943 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 06:42:37 np0005604943 kernel: SELinux:  policy capability open_perms=1
Feb  2 06:42:37 np0005604943 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 06:42:37 np0005604943 kernel: SELinux:  policy capability always_check_network=0
Feb  2 06:42:37 np0005604943 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 06:42:37 np0005604943 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 06:42:37 np0005604943 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 06:42:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:39 np0005604943 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Feb  2 06:42:40 np0005604943 podman[162819]: 2026-02-02 11:42:40.054906494 +0000 UTC m=+0.068851779 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb  2 06:42:40 np0005604943 podman[162839]: 2026-02-02 11:42:40.142288298 +0000 UTC m=+0.063907755 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:42:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:42:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:42:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:42:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:42:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:42:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:42:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:42:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:42:44 np0005604943 podman[163008]: 2026-02-02 11:42:44.709965491 +0000 UTC m=+0.044362445 container create 660ca5a3cbb4ddfcc8fa9eb3de91f204a96fef2f76488a46d304c5dd002e5c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ellis, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:42:44 np0005604943 systemd[1]: Started libpod-conmon-660ca5a3cbb4ddfcc8fa9eb3de91f204a96fef2f76488a46d304c5dd002e5c6c.scope.
Feb  2 06:42:44 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:42:44 np0005604943 podman[163008]: 2026-02-02 11:42:44.686395569 +0000 UTC m=+0.020792543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:42:44 np0005604943 podman[163008]: 2026-02-02 11:42:44.796664103 +0000 UTC m=+0.131061147 container init 660ca5a3cbb4ddfcc8fa9eb3de91f204a96fef2f76488a46d304c5dd002e5c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ellis, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 06:42:44 np0005604943 podman[163008]: 2026-02-02 11:42:44.805848749 +0000 UTC m=+0.140245703 container start 660ca5a3cbb4ddfcc8fa9eb3de91f204a96fef2f76488a46d304c5dd002e5c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ellis, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:42:44 np0005604943 podman[163008]: 2026-02-02 11:42:44.811147005 +0000 UTC m=+0.145544009 container attach 660ca5a3cbb4ddfcc8fa9eb3de91f204a96fef2f76488a46d304c5dd002e5c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ellis, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 06:42:44 np0005604943 funny_ellis[163024]: 167 167
Feb  2 06:42:44 np0005604943 systemd[1]: libpod-660ca5a3cbb4ddfcc8fa9eb3de91f204a96fef2f76488a46d304c5dd002e5c6c.scope: Deactivated successfully.
Feb  2 06:42:44 np0005604943 podman[163008]: 2026-02-02 11:42:44.820409472 +0000 UTC m=+0.154806446 container died 660ca5a3cbb4ddfcc8fa9eb3de91f204a96fef2f76488a46d304c5dd002e5c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ellis, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:42:44 np0005604943 systemd[1]: var-lib-containers-storage-overlay-9e72fcaa3aa9af25647dd51b69a081c6c740038b46bcb4439c86ab5d81adb3fb-merged.mount: Deactivated successfully.
Feb  2 06:42:44 np0005604943 podman[163008]: 2026-02-02 11:42:44.863331779 +0000 UTC m=+0.197728733 container remove 660ca5a3cbb4ddfcc8fa9eb3de91f204a96fef2f76488a46d304c5dd002e5c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 06:42:44 np0005604943 systemd[1]: libpod-conmon-660ca5a3cbb4ddfcc8fa9eb3de91f204a96fef2f76488a46d304c5dd002e5c6c.scope: Deactivated successfully.
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:42:44 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:42:45 np0005604943 podman[163048]: 2026-02-02 11:42:45.003617061 +0000 UTC m=+0.041320733 container create e4f22a33a5c7ea2180b5a452f86d55c2cb96600e3c80a7845ea0fdbb25b91c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 06:42:45 np0005604943 systemd[1]: Started libpod-conmon-e4f22a33a5c7ea2180b5a452f86d55c2cb96600e3c80a7845ea0fdbb25b91c67.scope.
Feb  2 06:42:45 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:42:45 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b8baffa89578274afe9b311c6ed400ec123c9a708ad6af312a4549b9469ccc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:42:45 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b8baffa89578274afe9b311c6ed400ec123c9a708ad6af312a4549b9469ccc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:42:45 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b8baffa89578274afe9b311c6ed400ec123c9a708ad6af312a4549b9469ccc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:42:45 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b8baffa89578274afe9b311c6ed400ec123c9a708ad6af312a4549b9469ccc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:42:45 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b8baffa89578274afe9b311c6ed400ec123c9a708ad6af312a4549b9469ccc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:42:45 np0005604943 podman[163048]: 2026-02-02 11:42:45.071514879 +0000 UTC m=+0.109218551 container init e4f22a33a5c7ea2180b5a452f86d55c2cb96600e3c80a7845ea0fdbb25b91c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb  2 06:42:45 np0005604943 podman[163048]: 2026-02-02 11:42:45.077233829 +0000 UTC m=+0.114937501 container start e4f22a33a5c7ea2180b5a452f86d55c2cb96600e3c80a7845ea0fdbb25b91c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_cohen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 06:42:45 np0005604943 podman[163048]: 2026-02-02 11:42:44.985008373 +0000 UTC m=+0.022712065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:42:45 np0005604943 podman[163048]: 2026-02-02 11:42:45.081087136 +0000 UTC m=+0.118790808 container attach e4f22a33a5c7ea2180b5a452f86d55c2cb96600e3c80a7845ea0fdbb25b91c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_cohen, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Feb  2 06:42:45 np0005604943 optimistic_cohen[163064]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:42:45 np0005604943 optimistic_cohen[163064]: --> All data devices are unavailable
Feb  2 06:42:45 np0005604943 systemd[1]: libpod-e4f22a33a5c7ea2180b5a452f86d55c2cb96600e3c80a7845ea0fdbb25b91c67.scope: Deactivated successfully.
Feb  2 06:42:45 np0005604943 podman[163048]: 2026-02-02 11:42:45.548086898 +0000 UTC m=+0.585790570 container died e4f22a33a5c7ea2180b5a452f86d55c2cb96600e3c80a7845ea0fdbb25b91c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_cohen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:42:45 np0005604943 systemd[1]: var-lib-containers-storage-overlay-11b8baffa89578274afe9b311c6ed400ec123c9a708ad6af312a4549b9469ccc-merged.mount: Deactivated successfully.
Feb  2 06:42:45 np0005604943 podman[163048]: 2026-02-02 11:42:45.609462218 +0000 UTC m=+0.647165900 container remove e4f22a33a5c7ea2180b5a452f86d55c2cb96600e3c80a7845ea0fdbb25b91c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_cohen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 06:42:45 np0005604943 systemd[1]: libpod-conmon-e4f22a33a5c7ea2180b5a452f86d55c2cb96600e3c80a7845ea0fdbb25b91c67.scope: Deactivated successfully.
Feb  2 06:42:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:46 np0005604943 podman[163155]: 2026-02-02 11:42:46.014341894 +0000 UTC m=+0.045715190 container create fc0dc711eac3ca5d5283d50162697027724d0db83219eb4ae1a05a7736d5d1f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 06:42:46 np0005604943 systemd[1]: Started libpod-conmon-fc0dc711eac3ca5d5283d50162697027724d0db83219eb4ae1a05a7736d5d1f6.scope.
Feb  2 06:42:46 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:42:46 np0005604943 podman[163155]: 2026-02-02 11:42:45.995499199 +0000 UTC m=+0.026872525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:42:46 np0005604943 podman[163155]: 2026-02-02 11:42:46.094871391 +0000 UTC m=+0.126244707 container init fc0dc711eac3ca5d5283d50162697027724d0db83219eb4ae1a05a7736d5d1f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sinoussi, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:42:46 np0005604943 podman[163155]: 2026-02-02 11:42:46.101060837 +0000 UTC m=+0.132434133 container start fc0dc711eac3ca5d5283d50162697027724d0db83219eb4ae1a05a7736d5d1f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:42:46 np0005604943 podman[163155]: 2026-02-02 11:42:46.104809002 +0000 UTC m=+0.136182558 container attach fc0dc711eac3ca5d5283d50162697027724d0db83219eb4ae1a05a7736d5d1f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sinoussi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:42:46 np0005604943 festive_sinoussi[163172]: 167 167
Feb  2 06:42:46 np0005604943 systemd[1]: libpod-fc0dc711eac3ca5d5283d50162697027724d0db83219eb4ae1a05a7736d5d1f6.scope: Deactivated successfully.
Feb  2 06:42:46 np0005604943 conmon[163172]: conmon fc0dc711eac3ca5d5283 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc0dc711eac3ca5d5283d50162697027724d0db83219eb4ae1a05a7736d5d1f6.scope/container/memory.events
Feb  2 06:42:46 np0005604943 podman[163155]: 2026-02-02 11:42:46.108128442 +0000 UTC m=+0.139501738 container died fc0dc711eac3ca5d5283d50162697027724d0db83219eb4ae1a05a7736d5d1f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:42:46 np0005604943 systemd[1]: var-lib-containers-storage-overlay-89e53168ac8b2b00c9a202c095f8251b796adf63e6d683343c7052f0b07911c8-merged.mount: Deactivated successfully.
Feb  2 06:42:46 np0005604943 podman[163155]: 2026-02-02 11:42:46.152050381 +0000 UTC m=+0.183423677 container remove fc0dc711eac3ca5d5283d50162697027724d0db83219eb4ae1a05a7736d5d1f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sinoussi, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:42:46 np0005604943 systemd[1]: libpod-conmon-fc0dc711eac3ca5d5283d50162697027724d0db83219eb4ae1a05a7736d5d1f6.scope: Deactivated successfully.
Feb  2 06:42:46 np0005604943 podman[163195]: 2026-02-02 11:42:46.264428567 +0000 UTC m=+0.039498225 container create acd878bba1783b382c95ed9eeda5033185ab33d8bc65e2d15af8e1d7b4443677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:42:46 np0005604943 systemd[1]: Started libpod-conmon-acd878bba1783b382c95ed9eeda5033185ab33d8bc65e2d15af8e1d7b4443677.scope.
Feb  2 06:42:46 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:42:46 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c922f8f8b2c6619290ed49582daca8d789aaf93a26c80672fe5775f76a363287/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:42:46 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c922f8f8b2c6619290ed49582daca8d789aaf93a26c80672fe5775f76a363287/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:42:46 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c922f8f8b2c6619290ed49582daca8d789aaf93a26c80672fe5775f76a363287/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:42:46 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c922f8f8b2c6619290ed49582daca8d789aaf93a26c80672fe5775f76a363287/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:42:46 np0005604943 podman[163195]: 2026-02-02 11:42:46.243019245 +0000 UTC m=+0.018088913 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:42:46 np0005604943 podman[163195]: 2026-02-02 11:42:46.343634939 +0000 UTC m=+0.118704627 container init acd878bba1783b382c95ed9eeda5033185ab33d8bc65e2d15af8e1d7b4443677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_archimedes, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 06:42:46 np0005604943 podman[163195]: 2026-02-02 11:42:46.352710031 +0000 UTC m=+0.127779699 container start acd878bba1783b382c95ed9eeda5033185ab33d8bc65e2d15af8e1d7b4443677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 06:42:46 np0005604943 podman[163195]: 2026-02-02 11:42:46.357426027 +0000 UTC m=+0.132495705 container attach acd878bba1783b382c95ed9eeda5033185ab33d8bc65e2d15af8e1d7b4443677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_archimedes, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]: {
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:    "0": [
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:        {
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "devices": [
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "/dev/loop3"
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            ],
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_name": "ceph_lv0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_size": "21470642176",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "name": "ceph_lv0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "tags": {
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.cluster_name": "ceph",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.crush_device_class": "",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.encrypted": "0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.objectstore": "bluestore",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.osd_id": "0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.type": "block",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.vdo": "0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.with_tpm": "0"
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            },
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "type": "block",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "vg_name": "ceph_vg0"
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:        }
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:    ],
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:    "1": [
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:        {
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "devices": [
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "/dev/loop4"
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            ],
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_name": "ceph_lv1",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_size": "21470642176",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "name": "ceph_lv1",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "tags": {
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.cluster_name": "ceph",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.crush_device_class": "",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.encrypted": "0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.objectstore": "bluestore",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.osd_id": "1",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.type": "block",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.vdo": "0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.with_tpm": "0"
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            },
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "type": "block",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "vg_name": "ceph_vg1"
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:        }
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:    ],
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:    "2": [
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:        {
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "devices": [
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "/dev/loop5"
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            ],
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_name": "ceph_lv2",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_size": "21470642176",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "name": "ceph_lv2",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "tags": {
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.cluster_name": "ceph",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.crush_device_class": "",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.encrypted": "0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.objectstore": "bluestore",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.osd_id": "2",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.type": "block",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.vdo": "0",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:                "ceph.with_tpm": "0"
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            },
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "type": "block",
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:            "vg_name": "ceph_vg2"
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:        }
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]:    ]
Feb  2 06:42:46 np0005604943 ecstatic_archimedes[163211]: }
Feb  2 06:42:46 np0005604943 systemd[1]: libpod-acd878bba1783b382c95ed9eeda5033185ab33d8bc65e2d15af8e1d7b4443677.scope: Deactivated successfully.
Feb  2 06:42:46 np0005604943 podman[163195]: 2026-02-02 11:42:46.625068853 +0000 UTC m=+0.400138511 container died acd878bba1783b382c95ed9eeda5033185ab33d8bc65e2d15af8e1d7b4443677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 06:42:46 np0005604943 systemd[1]: var-lib-containers-storage-overlay-c922f8f8b2c6619290ed49582daca8d789aaf93a26c80672fe5775f76a363287-merged.mount: Deactivated successfully.
Feb  2 06:42:46 np0005604943 podman[163195]: 2026-02-02 11:42:46.669448958 +0000 UTC m=+0.444518636 container remove acd878bba1783b382c95ed9eeda5033185ab33d8bc65e2d15af8e1d7b4443677 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 06:42:46 np0005604943 systemd[1]: libpod-conmon-acd878bba1783b382c95ed9eeda5033185ab33d8bc65e2d15af8e1d7b4443677.scope: Deactivated successfully.
Feb  2 06:42:47 np0005604943 podman[163294]: 2026-02-02 11:42:47.106972669 +0000 UTC m=+0.049704762 container create a9d0b2ebd2fd954f7dfbbea8555796ee3e17a332604a428da03701daefda3295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_kalam, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:42:47 np0005604943 systemd[1]: Started libpod-conmon-a9d0b2ebd2fd954f7dfbbea8555796ee3e17a332604a428da03701daefda3295.scope.
Feb  2 06:42:47 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:42:47 np0005604943 podman[163294]: 2026-02-02 11:42:47.164234123 +0000 UTC m=+0.106966216 container init a9d0b2ebd2fd954f7dfbbea8555796ee3e17a332604a428da03701daefda3295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_kalam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:42:47 np0005604943 podman[163294]: 2026-02-02 11:42:47.170118688 +0000 UTC m=+0.112850781 container start a9d0b2ebd2fd954f7dfbbea8555796ee3e17a332604a428da03701daefda3295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_kalam, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 06:42:47 np0005604943 busy_kalam[163311]: 167 167
Feb  2 06:42:47 np0005604943 systemd[1]: libpod-a9d0b2ebd2fd954f7dfbbea8555796ee3e17a332604a428da03701daefda3295.scope: Deactivated successfully.
Feb  2 06:42:47 np0005604943 podman[163294]: 2026-02-02 11:42:47.07779693 +0000 UTC m=+0.020529043 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:42:47 np0005604943 podman[163294]: 2026-02-02 11:42:47.175125445 +0000 UTC m=+0.117857618 container attach a9d0b2ebd2fd954f7dfbbea8555796ee3e17a332604a428da03701daefda3295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 06:42:47 np0005604943 podman[163294]: 2026-02-02 11:42:47.17558553 +0000 UTC m=+0.118317653 container died a9d0b2ebd2fd954f7dfbbea8555796ee3e17a332604a428da03701daefda3295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_kalam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 06:42:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:42:47 np0005604943 systemd[1]: var-lib-containers-storage-overlay-5d43eba9a5381aac7f87bdab98ae12ddb1167d24e4fcc7cc804dc5c19644af19-merged.mount: Deactivated successfully.
Feb  2 06:42:47 np0005604943 podman[163294]: 2026-02-02 11:42:47.211746112 +0000 UTC m=+0.154478205 container remove a9d0b2ebd2fd954f7dfbbea8555796ee3e17a332604a428da03701daefda3295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_kalam, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 06:42:47 np0005604943 systemd[1]: libpod-conmon-a9d0b2ebd2fd954f7dfbbea8555796ee3e17a332604a428da03701daefda3295.scope: Deactivated successfully.
Feb  2 06:42:47 np0005604943 podman[163336]: 2026-02-02 11:42:47.323947441 +0000 UTC m=+0.039039349 container create 2b9306ae60980bc918cd2d7e4ec8c7d302e91940c1a7aeaa63bc9b28053100e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 06:42:47 np0005604943 systemd[1]: Started libpod-conmon-2b9306ae60980bc918cd2d7e4ec8c7d302e91940c1a7aeaa63bc9b28053100e1.scope.
Feb  2 06:42:47 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:42:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0178a90be93d65a5077a6678da3d3e9076e6b2d0885da0de8b4f85ca67c2dcfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:42:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0178a90be93d65a5077a6678da3d3e9076e6b2d0885da0de8b4f85ca67c2dcfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:42:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0178a90be93d65a5077a6678da3d3e9076e6b2d0885da0de8b4f85ca67c2dcfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:42:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0178a90be93d65a5077a6678da3d3e9076e6b2d0885da0de8b4f85ca67c2dcfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:42:47 np0005604943 podman[163336]: 2026-02-02 11:42:47.389918283 +0000 UTC m=+0.105010221 container init 2b9306ae60980bc918cd2d7e4ec8c7d302e91940c1a7aeaa63bc9b28053100e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:42:47 np0005604943 podman[163336]: 2026-02-02 11:42:47.395752898 +0000 UTC m=+0.110844806 container start 2b9306ae60980bc918cd2d7e4ec8c7d302e91940c1a7aeaa63bc9b28053100e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:42:47 np0005604943 podman[163336]: 2026-02-02 11:42:47.398744637 +0000 UTC m=+0.113836565 container attach 2b9306ae60980bc918cd2d7e4ec8c7d302e91940c1a7aeaa63bc9b28053100e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 06:42:47 np0005604943 podman[163336]: 2026-02-02 11:42:47.306686277 +0000 UTC m=+0.021778205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:42:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:48 np0005604943 lvm[163431]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:42:48 np0005604943 lvm[163433]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:42:48 np0005604943 lvm[163433]: VG ceph_vg1 finished
Feb  2 06:42:48 np0005604943 lvm[163431]: VG ceph_vg0 finished
Feb  2 06:42:48 np0005604943 lvm[163435]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:42:48 np0005604943 lvm[163435]: VG ceph_vg2 finished
Feb  2 06:42:48 np0005604943 fervent_shamir[163353]: {}
Feb  2 06:42:48 np0005604943 podman[163336]: 2026-02-02 11:42:48.170414875 +0000 UTC m=+0.885506783 container died 2b9306ae60980bc918cd2d7e4ec8c7d302e91940c1a7aeaa63bc9b28053100e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 06:42:48 np0005604943 systemd[1]: libpod-2b9306ae60980bc918cd2d7e4ec8c7d302e91940c1a7aeaa63bc9b28053100e1.scope: Deactivated successfully.
Feb  2 06:42:48 np0005604943 systemd[1]: libpod-2b9306ae60980bc918cd2d7e4ec8c7d302e91940c1a7aeaa63bc9b28053100e1.scope: Consumed 1.098s CPU time.
Feb  2 06:42:48 np0005604943 systemd[1]: var-lib-containers-storage-overlay-0178a90be93d65a5077a6678da3d3e9076e6b2d0885da0de8b4f85ca67c2dcfb-merged.mount: Deactivated successfully.
Feb  2 06:42:48 np0005604943 podman[163336]: 2026-02-02 11:42:48.214338004 +0000 UTC m=+0.929429912 container remove 2b9306ae60980bc918cd2d7e4ec8c7d302e91940c1a7aeaa63bc9b28053100e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:42:48 np0005604943 systemd[1]: libpod-conmon-2b9306ae60980bc918cd2d7e4ec8c7d302e91940c1a7aeaa63bc9b28053100e1.scope: Deactivated successfully.
Feb  2 06:42:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:42:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:42:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:42:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:42:49 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:42:49 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:42:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:42:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:42:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 19 op/s
Feb  2 06:42:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 19 op/s
Feb  2 06:42:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:42:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Feb  2 06:42:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 06:43:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 06:43:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:43:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 06:43:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Feb  2 06:43:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:43:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Feb  2 06:43:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:43:09
Feb  2 06:43:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:43:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:43:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'backups']
Feb  2 06:43:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:43:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Feb  2 06:43:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:43:10.003 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:43:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:43:10.004 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:43:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:43:10.005 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:43:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:43:11 np0005604943 podman[180354]: 2026-02-02 11:43:11.030814427 +0000 UTC m=+0.050082220 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:43:11 np0005604943 podman[180353]: 2026-02-02 11:43:11.055675712 +0000 UTC m=+0.074815582 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Feb  2 06:43:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:43:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:43:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:18 np0005604943 kernel: SELinux:  Converting 2779 SID table entries...
Feb  2 06:43:18 np0005604943 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 06:43:18 np0005604943 kernel: SELinux:  policy capability open_perms=1
Feb  2 06:43:18 np0005604943 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 06:43:18 np0005604943 kernel: SELinux:  policy capability always_check_network=0
Feb  2 06:43:18 np0005604943 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 06:43:18 np0005604943 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 06:43:18 np0005604943 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 06:43:19 np0005604943 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Feb  2 06:43:19 np0005604943 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Feb  2 06:43:19 np0005604943 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Feb  2 06:43:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:43:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:43:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:26 np0005604943 systemd[1]: Stopping OpenSSH server daemon...
Feb  2 06:43:26 np0005604943 systemd[1]: sshd.service: Deactivated successfully.
Feb  2 06:43:26 np0005604943 systemd[1]: Stopped OpenSSH server daemon.
Feb  2 06:43:26 np0005604943 systemd[1]: sshd.service: Consumed 2.595s CPU time, read 564.0K from disk, written 28.0K to disk.
Feb  2 06:43:26 np0005604943 systemd[1]: Stopped target sshd-keygen.target.
Feb  2 06:43:26 np0005604943 systemd[1]: Stopping sshd-keygen.target...
Feb  2 06:43:26 np0005604943 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 06:43:26 np0005604943 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 06:43:26 np0005604943 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 06:43:26 np0005604943 systemd[1]: Reached target sshd-keygen.target.
Feb  2 06:43:26 np0005604943 systemd[1]: Starting OpenSSH server daemon...
Feb  2 06:43:26 np0005604943 systemd[1]: Started OpenSSH server daemon.
Feb  2 06:43:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:43:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:28 np0005604943 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 06:43:28 np0005604943 systemd[1]: Starting man-db-cache-update.service...
Feb  2 06:43:28 np0005604943 systemd[1]: Reloading.
Feb  2 06:43:28 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:43:28 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:43:28 np0005604943 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 06:43:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:31 np0005604943 python3.9[187025]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 06:43:31 np0005604943 systemd[1]: Reloading.
Feb  2 06:43:31 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:43:31 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:43:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:43:32 np0005604943 python3.9[188801]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 06:43:32 np0005604943 systemd[1]: Reloading.
Feb  2 06:43:32 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:43:32 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:43:33 np0005604943 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 06:43:33 np0005604943 systemd[1]: Finished man-db-cache-update.service.
Feb  2 06:43:33 np0005604943 systemd[1]: man-db-cache-update.service: Consumed 6.508s CPU time.
Feb  2 06:43:33 np0005604943 systemd[1]: run-r9bfd128b294448e9b50732cd591255fe.service: Deactivated successfully.
Feb  2 06:43:33 np0005604943 python3.9[190432]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 06:43:33 np0005604943 systemd[1]: Reloading.
Feb  2 06:43:33 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:43:33 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:43:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:34 np0005604943 python3.9[190623]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 06:43:34 np0005604943 systemd[1]: Reloading.
Feb  2 06:43:34 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:43:34 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:43:35 np0005604943 python3.9[190813]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:35 np0005604943 systemd[1]: Reloading.
Feb  2 06:43:35 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:43:35 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:43:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:36 np0005604943 python3.9[191002]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:36 np0005604943 systemd[1]: Reloading.
Feb  2 06:43:36 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:43:36 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:43:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:43:37 np0005604943 python3.9[191192]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:37 np0005604943 systemd[1]: Reloading.
Feb  2 06:43:37 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:43:37 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:43:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:38 np0005604943 python3.9[191381]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:38 np0005604943 python3.9[191536]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:39 np0005604943 systemd[1]: Reloading.
Feb  2 06:43:39 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:43:39 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:43:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:40 np0005604943 python3.9[191726]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 06:43:40 np0005604943 systemd[1]: Reloading.
Feb  2 06:43:40 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:43:40 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:43:40 np0005604943 systemd[1]: Listening on libvirt proxy daemon socket.
Feb  2 06:43:40 np0005604943 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Feb  2 06:43:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:43:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:43:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:43:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:43:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:43:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:43:41 np0005604943 python3.9[191919]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:41 np0005604943 podman[191921]: 2026-02-02 11:43:41.254843228 +0000 UTC m=+0.068465902 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 06:43:41 np0005604943 podman[191922]: 2026-02-02 11:43:41.259714053 +0000 UTC m=+0.073207092 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Feb  2 06:43:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:41 np0005604943 python3.9[192119]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:43:42 np0005604943 python3.9[192274]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:43 np0005604943 python3.9[192429]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:43 np0005604943 python3.9[192584]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:44 np0005604943 python3.9[192739]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:45 np0005604943 python3.9[192894]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:45 np0005604943 python3.9[193049]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:46 np0005604943 python3.9[193204]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:43:47 np0005604943 python3.9[193359]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:48 np0005604943 python3.9[193514]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:48 np0005604943 python3.9[193707]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:48 np0005604943 podman[193767]: 2026-02-02 11:43:48.790556202 +0000 UTC m=+0.053805037 container exec fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 06:43:48 np0005604943 podman[193767]: 2026-02-02 11:43:48.883526219 +0000 UTC m=+0.146775034 container exec_died fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 06:43:49 np0005604943 python3.9[194000]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:43:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:43:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:43:50 np0005604943 python3.9[194321]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 06:43:50 np0005604943 podman[194427]: 2026-02-02 11:43:50.2962326 +0000 UTC m=+0.031291955 container create 57a51fb2c11f1aff06b9a59420e010e5e45b688d10361a9b26d9605b45960f75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 06:43:50 np0005604943 systemd[1]: Started libpod-conmon-57a51fb2c11f1aff06b9a59420e010e5e45b688d10361a9b26d9605b45960f75.scope.
Feb  2 06:43:50 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:43:50 np0005604943 podman[194427]: 2026-02-02 11:43:50.354080527 +0000 UTC m=+0.089139902 container init 57a51fb2c11f1aff06b9a59420e010e5e45b688d10361a9b26d9605b45960f75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:43:50 np0005604943 podman[194427]: 2026-02-02 11:43:50.359068006 +0000 UTC m=+0.094127361 container start 57a51fb2c11f1aff06b9a59420e010e5e45b688d10361a9b26d9605b45960f75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 06:43:50 np0005604943 podman[194427]: 2026-02-02 11:43:50.361974686 +0000 UTC m=+0.097034061 container attach 57a51fb2c11f1aff06b9a59420e010e5e45b688d10361a9b26d9605b45960f75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 06:43:50 np0005604943 vigilant_lalande[194443]: 167 167
Feb  2 06:43:50 np0005604943 systemd[1]: libpod-57a51fb2c11f1aff06b9a59420e010e5e45b688d10361a9b26d9605b45960f75.scope: Deactivated successfully.
Feb  2 06:43:50 np0005604943 conmon[194443]: conmon 57a51fb2c11f1aff06b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57a51fb2c11f1aff06b9a59420e010e5e45b688d10361a9b26d9605b45960f75.scope/container/memory.events
Feb  2 06:43:50 np0005604943 podman[194427]: 2026-02-02 11:43:50.364567188 +0000 UTC m=+0.099626543 container died 57a51fb2c11f1aff06b9a59420e010e5e45b688d10361a9b26d9605b45960f75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Feb  2 06:43:50 np0005604943 podman[194427]: 2026-02-02 11:43:50.283340275 +0000 UTC m=+0.018399640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:43:50 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ffa8871c9b8402fccdd573e6ddd4e881293162fa8ca3fa681fa9451f16f82087-merged.mount: Deactivated successfully.
Feb  2 06:43:50 np0005604943 podman[194427]: 2026-02-02 11:43:50.404316765 +0000 UTC m=+0.139376140 container remove 57a51fb2c11f1aff06b9a59420e010e5e45b688d10361a9b26d9605b45960f75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 06:43:50 np0005604943 systemd[1]: libpod-conmon-57a51fb2c11f1aff06b9a59420e010e5e45b688d10361a9b26d9605b45960f75.scope: Deactivated successfully.
Feb  2 06:43:50 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:43:50 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:43:50 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:43:50 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:43:50 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:43:50 np0005604943 podman[194519]: 2026-02-02 11:43:50.543780746 +0000 UTC m=+0.043157353 container create 5632a559f01999fd60bb54c31d2741ee0b018662bb395e720ff8d8d458cafd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Feb  2 06:43:50 np0005604943 systemd[1]: Started libpod-conmon-5632a559f01999fd60bb54c31d2741ee0b018662bb395e720ff8d8d458cafd28.scope.
Feb  2 06:43:50 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:43:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4c65863744e70b6c6c2dd3d4da479268d64284928b2cc2968918228627b3ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:43:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4c65863744e70b6c6c2dd3d4da479268d64284928b2cc2968918228627b3ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:43:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4c65863744e70b6c6c2dd3d4da479268d64284928b2cc2968918228627b3ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:43:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4c65863744e70b6c6c2dd3d4da479268d64284928b2cc2968918228627b3ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:43:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4c65863744e70b6c6c2dd3d4da479268d64284928b2cc2968918228627b3ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:43:50 np0005604943 podman[194519]: 2026-02-02 11:43:50.526459097 +0000 UTC m=+0.025835734 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:43:50 np0005604943 podman[194519]: 2026-02-02 11:43:50.628549367 +0000 UTC m=+0.127926004 container init 5632a559f01999fd60bb54c31d2741ee0b018662bb395e720ff8d8d458cafd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:43:50 np0005604943 podman[194519]: 2026-02-02 11:43:50.635611932 +0000 UTC m=+0.134988539 container start 5632a559f01999fd60bb54c31d2741ee0b018662bb395e720ff8d8d458cafd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:43:50 np0005604943 podman[194519]: 2026-02-02 11:43:50.639197711 +0000 UTC m=+0.138574318 container attach 5632a559f01999fd60bb54c31d2741ee0b018662bb395e720ff8d8d458cafd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:43:50 np0005604943 python3.9[194615]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:43:51 np0005604943 sweet_burnell[194581]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:43:51 np0005604943 sweet_burnell[194581]: --> All data devices are unavailable
Feb  2 06:43:51 np0005604943 systemd[1]: libpod-5632a559f01999fd60bb54c31d2741ee0b018662bb395e720ff8d8d458cafd28.scope: Deactivated successfully.
Feb  2 06:43:51 np0005604943 podman[194519]: 2026-02-02 11:43:51.06830057 +0000 UTC m=+0.567677227 container died 5632a559f01999fd60bb54c31d2741ee0b018662bb395e720ff8d8d458cafd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 06:43:51 np0005604943 systemd[1]: var-lib-containers-storage-overlay-dc4c65863744e70b6c6c2dd3d4da479268d64284928b2cc2968918228627b3ff-merged.mount: Deactivated successfully.
Feb  2 06:43:51 np0005604943 podman[194519]: 2026-02-02 11:43:51.109257501 +0000 UTC m=+0.608634108 container remove 5632a559f01999fd60bb54c31d2741ee0b018662bb395e720ff8d8d458cafd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 06:43:51 np0005604943 systemd[1]: libpod-conmon-5632a559f01999fd60bb54c31d2741ee0b018662bb395e720ff8d8d458cafd28.scope: Deactivated successfully.
Feb  2 06:43:51 np0005604943 python3.9[194813]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:43:51 np0005604943 podman[194880]: 2026-02-02 11:43:51.486278283 +0000 UTC m=+0.035653556 container create cd0df6bafd99ca0b06a9222e2a65406ff6a96efe0f4b5346d9f0368490d9a471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 06:43:51 np0005604943 systemd[1]: Started libpod-conmon-cd0df6bafd99ca0b06a9222e2a65406ff6a96efe0f4b5346d9f0368490d9a471.scope.
Feb  2 06:43:51 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:43:51 np0005604943 podman[194880]: 2026-02-02 11:43:51.558098286 +0000 UTC m=+0.107473589 container init cd0df6bafd99ca0b06a9222e2a65406ff6a96efe0f4b5346d9f0368490d9a471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:43:51 np0005604943 podman[194880]: 2026-02-02 11:43:51.56407427 +0000 UTC m=+0.113449543 container start cd0df6bafd99ca0b06a9222e2a65406ff6a96efe0f4b5346d9f0368490d9a471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_satoshi, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:43:51 np0005604943 podman[194880]: 2026-02-02 11:43:51.470939499 +0000 UTC m=+0.020314792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:43:51 np0005604943 naughty_satoshi[194948]: 167 167
Feb  2 06:43:51 np0005604943 systemd[1]: libpod-cd0df6bafd99ca0b06a9222e2a65406ff6a96efe0f4b5346d9f0368490d9a471.scope: Deactivated successfully.
Feb  2 06:43:51 np0005604943 podman[194880]: 2026-02-02 11:43:51.568464882 +0000 UTC m=+0.117840375 container attach cd0df6bafd99ca0b06a9222e2a65406ff6a96efe0f4b5346d9f0368490d9a471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_satoshi, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 06:43:51 np0005604943 podman[194880]: 2026-02-02 11:43:51.569051348 +0000 UTC m=+0.118426621 container died cd0df6bafd99ca0b06a9222e2a65406ff6a96efe0f4b5346d9f0368490d9a471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_satoshi, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 06:43:51 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1bc98d2c185fec842717605e21c3f37fc1c36398581c7dc79a84268eff3d748c-merged.mount: Deactivated successfully.
Feb  2 06:43:51 np0005604943 podman[194880]: 2026-02-02 11:43:51.604789445 +0000 UTC m=+0.154164718 container remove cd0df6bafd99ca0b06a9222e2a65406ff6a96efe0f4b5346d9f0368490d9a471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb  2 06:43:51 np0005604943 systemd[1]: libpod-conmon-cd0df6bafd99ca0b06a9222e2a65406ff6a96efe0f4b5346d9f0368490d9a471.scope: Deactivated successfully.
Feb  2 06:43:51 np0005604943 podman[195046]: 2026-02-02 11:43:51.735113123 +0000 UTC m=+0.041838405 container create 74e58efeadb3651e47d291823f628a7d116e774b164d99e14c406de7a5ef20e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 06:43:51 np0005604943 systemd[1]: Started libpod-conmon-74e58efeadb3651e47d291823f628a7d116e774b164d99e14c406de7a5ef20e6.scope.
Feb  2 06:43:51 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:43:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5e885b744ec18dce31c956c795c7c532d768e43363275c386ba9a893de65f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:43:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5e885b744ec18dce31c956c795c7c532d768e43363275c386ba9a893de65f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:43:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5e885b744ec18dce31c956c795c7c532d768e43363275c386ba9a893de65f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:43:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5e885b744ec18dce31c956c795c7c532d768e43363275c386ba9a893de65f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:43:51 np0005604943 podman[195046]: 2026-02-02 11:43:51.717493267 +0000 UTC m=+0.024218589 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:43:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:51 np0005604943 podman[195046]: 2026-02-02 11:43:51.871689285 +0000 UTC m=+0.178414587 container init 74e58efeadb3651e47d291823f628a7d116e774b164d99e14c406de7a5ef20e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 06:43:51 np0005604943 podman[195046]: 2026-02-02 11:43:51.880707704 +0000 UTC m=+0.187432986 container start 74e58efeadb3651e47d291823f628a7d116e774b164d99e14c406de7a5ef20e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:43:51 np0005604943 python3.9[195054]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:43:51 np0005604943 podman[195046]: 2026-02-02 11:43:51.912351488 +0000 UTC m=+0.219076800 container attach 74e58efeadb3651e47d291823f628a7d116e774b164d99e14c406de7a5ef20e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:43:52 np0005604943 reverent_buck[195064]: {
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:    "0": [
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:        {
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "devices": [
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "/dev/loop3"
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            ],
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_name": "ceph_lv0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_size": "21470642176",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "name": "ceph_lv0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "tags": {
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.cluster_name": "ceph",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.crush_device_class": "",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.encrypted": "0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.objectstore": "bluestore",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.osd_id": "0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.type": "block",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.vdo": "0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.with_tpm": "0"
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            },
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "type": "block",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "vg_name": "ceph_vg0"
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:        }
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:    ],
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:    "1": [
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:        {
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "devices": [
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "/dev/loop4"
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            ],
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_name": "ceph_lv1",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_size": "21470642176",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "name": "ceph_lv1",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "tags": {
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.cluster_name": "ceph",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.crush_device_class": "",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.encrypted": "0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.objectstore": "bluestore",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.osd_id": "1",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.type": "block",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.vdo": "0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.with_tpm": "0"
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            },
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "type": "block",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "vg_name": "ceph_vg1"
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:        }
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:    ],
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:    "2": [
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:        {
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "devices": [
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "/dev/loop5"
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            ],
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_name": "ceph_lv2",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_size": "21470642176",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "name": "ceph_lv2",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "tags": {
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.cluster_name": "ceph",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.crush_device_class": "",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.encrypted": "0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.objectstore": "bluestore",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.osd_id": "2",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.type": "block",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.vdo": "0",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:                "ceph.with_tpm": "0"
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            },
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "type": "block",
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:            "vg_name": "ceph_vg2"
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:        }
Feb  2 06:43:52 np0005604943 reverent_buck[195064]:    ]
Feb  2 06:43:52 np0005604943 reverent_buck[195064]: }
Feb  2 06:43:52 np0005604943 systemd[1]: libpod-74e58efeadb3651e47d291823f628a7d116e774b164d99e14c406de7a5ef20e6.scope: Deactivated successfully.
Feb  2 06:43:52 np0005604943 podman[195046]: 2026-02-02 11:43:52.160802829 +0000 UTC m=+0.467528121 container died 74e58efeadb3651e47d291823f628a7d116e774b164d99e14c406de7a5ef20e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:43:52 np0005604943 systemd[1]: var-lib-containers-storage-overlay-bd5e885b744ec18dce31c956c795c7c532d768e43363275c386ba9a893de65f2-merged.mount: Deactivated successfully.
Feb  2 06:43:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:43:52 np0005604943 podman[195046]: 2026-02-02 11:43:52.261823229 +0000 UTC m=+0.568548511 container remove 74e58efeadb3651e47d291823f628a7d116e774b164d99e14c406de7a5ef20e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 06:43:52 np0005604943 systemd[1]: libpod-conmon-74e58efeadb3651e47d291823f628a7d116e774b164d99e14c406de7a5ef20e6.scope: Deactivated successfully.
Feb  2 06:43:52 np0005604943 python3.9[195237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:43:52 np0005604943 podman[195397]: 2026-02-02 11:43:52.66961796 +0000 UTC m=+0.040187781 container create 355a0e42223d4db5fe554886f283e13276a3f38e405edea9da5a9c9b1b8cd278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_faraday, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:43:52 np0005604943 systemd[1]: Started libpod-conmon-355a0e42223d4db5fe554886f283e13276a3f38e405edea9da5a9c9b1b8cd278.scope.
Feb  2 06:43:52 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:43:52 np0005604943 podman[195397]: 2026-02-02 11:43:52.736318342 +0000 UTC m=+0.106888203 container init 355a0e42223d4db5fe554886f283e13276a3f38e405edea9da5a9c9b1b8cd278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:43:52 np0005604943 podman[195397]: 2026-02-02 11:43:52.740795215 +0000 UTC m=+0.111365026 container start 355a0e42223d4db5fe554886f283e13276a3f38e405edea9da5a9c9b1b8cd278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 06:43:52 np0005604943 gracious_faraday[195440]: 167 167
Feb  2 06:43:52 np0005604943 systemd[1]: libpod-355a0e42223d4db5fe554886f283e13276a3f38e405edea9da5a9c9b1b8cd278.scope: Deactivated successfully.
Feb  2 06:43:52 np0005604943 podman[195397]: 2026-02-02 11:43:52.744116237 +0000 UTC m=+0.114686078 container attach 355a0e42223d4db5fe554886f283e13276a3f38e405edea9da5a9c9b1b8cd278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_faraday, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 06:43:52 np0005604943 podman[195397]: 2026-02-02 11:43:52.744427415 +0000 UTC m=+0.114997236 container died 355a0e42223d4db5fe554886f283e13276a3f38e405edea9da5a9c9b1b8cd278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_faraday, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:43:52 np0005604943 podman[195397]: 2026-02-02 11:43:52.650930124 +0000 UTC m=+0.021499975 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:43:52 np0005604943 systemd[1]: var-lib-containers-storage-overlay-68253398347dcb4944feb0d025158dee23d04ca6d0f9c5442b945797e04ca83d-merged.mount: Deactivated successfully.
Feb  2 06:43:52 np0005604943 podman[195397]: 2026-02-02 11:43:52.772944373 +0000 UTC m=+0.143514194 container remove 355a0e42223d4db5fe554886f283e13276a3f38e405edea9da5a9c9b1b8cd278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:43:52 np0005604943 systemd[1]: libpod-conmon-355a0e42223d4db5fe554886f283e13276a3f38e405edea9da5a9c9b1b8cd278.scope: Deactivated successfully.
Feb  2 06:43:52 np0005604943 podman[195494]: 2026-02-02 11:43:52.894547621 +0000 UTC m=+0.034791432 container create 49afb8cd71e4376f74805eead3999392eafc075fece531d2a9a6c6c7dd44b3e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kalam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:43:52 np0005604943 systemd[1]: Started libpod-conmon-49afb8cd71e4376f74805eead3999392eafc075fece531d2a9a6c6c7dd44b3e9.scope.
Feb  2 06:43:52 np0005604943 python3.9[195474]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:43:52 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:43:52 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a03abcdf8b2cf33dfeb9a5adec23d9536bbbc19277c559bed805e35c8f52ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:43:52 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a03abcdf8b2cf33dfeb9a5adec23d9536bbbc19277c559bed805e35c8f52ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:43:52 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a03abcdf8b2cf33dfeb9a5adec23d9536bbbc19277c559bed805e35c8f52ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:43:52 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a03abcdf8b2cf33dfeb9a5adec23d9536bbbc19277c559bed805e35c8f52ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:43:52 np0005604943 podman[195494]: 2026-02-02 11:43:52.960723648 +0000 UTC m=+0.100967489 container init 49afb8cd71e4376f74805eead3999392eafc075fece531d2a9a6c6c7dd44b3e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:43:52 np0005604943 podman[195494]: 2026-02-02 11:43:52.966266702 +0000 UTC m=+0.106510523 container start 49afb8cd71e4376f74805eead3999392eafc075fece531d2a9a6c6c7dd44b3e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kalam, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:43:52 np0005604943 podman[195494]: 2026-02-02 11:43:52.969554062 +0000 UTC m=+0.109797903 container attach 49afb8cd71e4376f74805eead3999392eafc075fece531d2a9a6c6c7dd44b3e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:43:52 np0005604943 podman[195494]: 2026-02-02 11:43:52.881763487 +0000 UTC m=+0.022007328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:43:53 np0005604943 python3.9[195678]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:43:53 np0005604943 lvm[195765]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:43:53 np0005604943 lvm[195766]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:43:53 np0005604943 lvm[195766]: VG ceph_vg1 finished
Feb  2 06:43:53 np0005604943 lvm[195765]: VG ceph_vg0 finished
Feb  2 06:43:53 np0005604943 lvm[195768]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:43:53 np0005604943 lvm[195768]: VG ceph_vg2 finished
Feb  2 06:43:53 np0005604943 vigorous_kalam[195511]: {}
Feb  2 06:43:53 np0005604943 systemd[1]: libpod-49afb8cd71e4376f74805eead3999392eafc075fece531d2a9a6c6c7dd44b3e9.scope: Deactivated successfully.
Feb  2 06:43:53 np0005604943 podman[195494]: 2026-02-02 11:43:53.662776445 +0000 UTC m=+0.803020266 container died 49afb8cd71e4376f74805eead3999392eafc075fece531d2a9a6c6c7dd44b3e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kalam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:43:53 np0005604943 systemd[1]: libpod-49afb8cd71e4376f74805eead3999392eafc075fece531d2a9a6c6c7dd44b3e9.scope: Consumed 1.079s CPU time.
Feb  2 06:43:53 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a8a03abcdf8b2cf33dfeb9a5adec23d9536bbbc19277c559bed805e35c8f52ad-merged.mount: Deactivated successfully.
Feb  2 06:43:53 np0005604943 podman[195494]: 2026-02-02 11:43:53.695261422 +0000 UTC m=+0.835505243 container remove 49afb8cd71e4376f74805eead3999392eafc075fece531d2a9a6c6c7dd44b3e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kalam, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:43:53 np0005604943 systemd[1]: libpod-conmon-49afb8cd71e4376f74805eead3999392eafc075fece531d2a9a6c6c7dd44b3e9.scope: Deactivated successfully.
Feb  2 06:43:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:43:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:43:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:43:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:43:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:54 np0005604943 python3.9[195933]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:43:54 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:43:54 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:43:54 np0005604943 python3.9[196085]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:43:55 np0005604943 python3.9[196210]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770032634.228605-557-155528722482454/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:43:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:56 np0005604943 python3.9[196362]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:43:56 np0005604943 python3.9[196487]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770032635.5915008-557-130896686634229/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:43:57 np0005604943 python3.9[196639]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:43:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:43:57 np0005604943 python3.9[196764]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770032636.6145103-557-175392285656948/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:43:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:43:58 np0005604943 python3.9[196916]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:43:58 np0005604943 python3.9[197041]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770032637.6725123-557-247871037459682/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:43:59 np0005604943 python3.9[197194]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:43:59 np0005604943 python3.9[197319]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770032638.7466505-557-73338981070220/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:43:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:00 np0005604943 python3.9[197471]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:00 np0005604943 python3.9[197596]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770032639.853514-557-170425885751997/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:01 np0005604943 python3.9[197748]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:01 np0005604943 python3.9[197871]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770032640.8795269-557-76333777541066/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:44:02 np0005604943 python3.9[198023]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:02 np0005604943 python3.9[198148]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770032641.9286954-557-225319243719718/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:03 np0005604943 python3.9[198300]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Feb  2 06:44:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:03 np0005604943 python3.9[198453]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:04 np0005604943 python3.9[198605]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:05 np0005604943 python3.9[198757]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:05 np0005604943 python3.9[198909]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:06 np0005604943 python3.9[199061]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:06 np0005604943 python3.9[199213]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:07 np0005604943 python3.9[199365]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:44:07 np0005604943 python3.9[199517]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:08 np0005604943 python3.9[199669]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:08 np0005604943 python3.9[199821]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:09 np0005604943 python3.9[199973]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:44:09
Feb  2 06:44:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:44:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:44:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['backups', 'volumes', 'images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'vms', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data']
Feb  2 06:44:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:44:09 np0005604943 python3.9[200125]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:44:10.005 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:44:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:44:10.006 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:44:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:44:10.006 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:44:10 np0005604943 python3.9[200277]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:44:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:44:10 np0005604943 python3.9[200429]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:11 np0005604943 podman[200554]: 2026-02-02 11:44:11.412250272 +0000 UTC m=+0.051918304 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:44:11 np0005604943 podman[200553]: 2026-02-02 11:44:11.458864839 +0000 UTC m=+0.100058663 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 06:44:11 np0005604943 python3.9[200616]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:12 np0005604943 python3.9[200746]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032651.1449623-778-164316538555356/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:44:12 np0005604943 python3.9[200898]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:13 np0005604943 python3.9[201021]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032652.229755-778-264704542337819/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:13 np0005604943 python3.9[201173]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:14 np0005604943 python3.9[201296]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032653.200491-778-220123375719908/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:14 np0005604943 python3.9[201448]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:15 np0005604943 python3.9[201571]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032654.1518326-778-112507838836322/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:15 np0005604943 python3.9[201723]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:15 np0005604943 python3.9[201846]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032655.1245947-778-163306126558225/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:16 np0005604943 python3.9[201998]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:16 np0005604943 python3.9[202121]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032656.0624778-778-176677060802009/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:44:17 np0005604943 python3.9[202273]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:17 np0005604943 python3.9[202396]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032657.0283391-778-280425291812384/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:18 np0005604943 python3.9[202548]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:18 np0005604943 python3.9[202671]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032658.0339947-778-172764596449617/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:19 np0005604943 python3.9[202823]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:19 np0005604943 python3.9[202946]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032659.0044088-778-8091125231144/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:20 np0005604943 python3.9[203098]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:20 np0005604943 python3.9[203221]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032660.003829-778-155170384174662/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:44:21 np0005604943 python3.9[203373]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:21 np0005604943 python3.9[203496]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032661.1072116-778-131683075534293/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:44:22 np0005604943 python3.9[203648]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:23 np0005604943 python3.9[203771]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032662.1189628-778-118946304245223/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:23 np0005604943 python3.9[203923]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:24 np0005604943 python3.9[204046]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032663.171202-778-253149963512053/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:24 np0005604943 python3.9[204198]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:25 np0005604943 python3.9[204322]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032664.1418247-778-235776988850434/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:25 np0005604943 python3.9[204472]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:44:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:26 np0005604943 python3.9[204627]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Feb  2 06:44:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:44:27 np0005604943 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Feb  2 06:44:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:28 np0005604943 python3.9[204783]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:28 np0005604943 python3.9[204935]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:28 np0005604943 auditd[704]: Audit daemon rotating log files
Feb  2 06:44:29 np0005604943 python3.9[205087]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:29 np0005604943 python3.9[205239]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:30 np0005604943 python3.9[205391]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:30 np0005604943 python3.9[205543]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:31 np0005604943 python3.9[205695]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:31 np0005604943 python3.9[205847]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:44:32 np0005604943 python3.9[205999]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:32 np0005604943 python3.9[206151]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:33 np0005604943 python3.9[206303]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:44:33 np0005604943 systemd[1]: Reloading.
Feb  2 06:44:33 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:44:33 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:44:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:33 np0005604943 systemd[1]: Starting libvirt logging daemon socket...
Feb  2 06:44:33 np0005604943 systemd[1]: Listening on libvirt logging daemon socket.
Feb  2 06:44:33 np0005604943 systemd[1]: Starting libvirt logging daemon admin socket...
Feb  2 06:44:33 np0005604943 systemd[1]: Listening on libvirt logging daemon admin socket.
Feb  2 06:44:33 np0005604943 systemd[1]: Starting libvirt logging daemon...
Feb  2 06:44:34 np0005604943 systemd[1]: Started libvirt logging daemon.
Feb  2 06:44:34 np0005604943 python3.9[206496]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:44:34 np0005604943 systemd[1]: Reloading.
Feb  2 06:44:34 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:44:34 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:44:34 np0005604943 systemd[1]: Starting libvirt nodedev daemon socket...
Feb  2 06:44:34 np0005604943 systemd[1]: Listening on libvirt nodedev daemon socket.
Feb  2 06:44:34 np0005604943 systemd[1]: Starting libvirt nodedev daemon admin socket...
Feb  2 06:44:34 np0005604943 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Feb  2 06:44:34 np0005604943 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Feb  2 06:44:34 np0005604943 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Feb  2 06:44:34 np0005604943 systemd[1]: Starting libvirt nodedev daemon...
Feb  2 06:44:35 np0005604943 systemd[1]: Started libvirt nodedev daemon.
Feb  2 06:44:35 np0005604943 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb  2 06:44:35 np0005604943 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb  2 06:44:35 np0005604943 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Feb  2 06:44:35 np0005604943 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Feb  2 06:44:35 np0005604943 python3.9[206712]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:44:35 np0005604943 systemd[1]: Reloading.
Feb  2 06:44:35 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:44:35 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:44:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:35 np0005604943 systemd[1]: Starting libvirt proxy daemon admin socket...
Feb  2 06:44:35 np0005604943 systemd[1]: Starting libvirt proxy daemon read-only socket...
Feb  2 06:44:35 np0005604943 systemd[1]: Listening on libvirt proxy daemon admin socket.
Feb  2 06:44:35 np0005604943 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Feb  2 06:44:35 np0005604943 systemd[1]: Starting libvirt proxy daemon...
Feb  2 06:44:35 np0005604943 systemd[1]: Started libvirt proxy daemon.
Feb  2 06:44:36 np0005604943 setroubleshoot[206558]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 4248da84-5cd4-451b-924d-0cbda9a42c16
Feb  2 06:44:36 np0005604943 setroubleshoot[206558]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Feb  2 06:44:36 np0005604943 setroubleshoot[206558]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 4248da84-5cd4-451b-924d-0cbda9a42c16
Feb  2 06:44:36 np0005604943 setroubleshoot[206558]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Feb  2 06:44:36 np0005604943 python3.9[206932]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:44:36 np0005604943 systemd[1]: Reloading.
Feb  2 06:44:36 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:44:36 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:44:36 np0005604943 systemd[1]: Listening on libvirt locking daemon socket.
Feb  2 06:44:36 np0005604943 systemd[1]: Starting libvirt QEMU daemon socket...
Feb  2 06:44:36 np0005604943 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb  2 06:44:36 np0005604943 systemd[1]: Starting Virtual Machine and Container Registration Service...
Feb  2 06:44:36 np0005604943 systemd[1]: Listening on libvirt QEMU daemon socket.
Feb  2 06:44:36 np0005604943 systemd[1]: Starting libvirt QEMU daemon admin socket...
Feb  2 06:44:36 np0005604943 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Feb  2 06:44:36 np0005604943 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Feb  2 06:44:36 np0005604943 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Feb  2 06:44:36 np0005604943 systemd[1]: Started Virtual Machine and Container Registration Service.
Feb  2 06:44:36 np0005604943 systemd[1]: Starting libvirt QEMU daemon...
Feb  2 06:44:36 np0005604943 systemd[1]: Started libvirt QEMU daemon.
Feb  2 06:44:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:44:37 np0005604943 python3.9[207148]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:44:37 np0005604943 systemd[1]: Reloading.
Feb  2 06:44:37 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:44:37 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:44:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:37 np0005604943 systemd[1]: Starting libvirt secret daemon socket...
Feb  2 06:44:37 np0005604943 systemd[1]: Listening on libvirt secret daemon socket.
Feb  2 06:44:37 np0005604943 systemd[1]: Starting libvirt secret daemon admin socket...
Feb  2 06:44:37 np0005604943 systemd[1]: Starting libvirt secret daemon read-only socket...
Feb  2 06:44:37 np0005604943 systemd[1]: Listening on libvirt secret daemon admin socket.
Feb  2 06:44:37 np0005604943 systemd[1]: Listening on libvirt secret daemon read-only socket.
Feb  2 06:44:37 np0005604943 systemd[1]: Starting libvirt secret daemon...
Feb  2 06:44:37 np0005604943 systemd[1]: Started libvirt secret daemon.
Feb  2 06:44:38 np0005604943 python3.9[207360]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:39 np0005604943 python3.9[207512]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 06:44:39 np0005604943 python3.9[207664]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:44:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:40 np0005604943 python3.9[207818]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 06:44:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:44:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:44:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:44:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:44:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:44:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:44:40 np0005604943 python3.9[207968]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:41 np0005604943 python3.9[208089]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032680.526211-1136-162645136708397/.source.xml follow=False _original_basename=secret.xml.j2 checksum=82929000f893699c0e0b3e35b4291f1513dbf48c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:41 np0005604943 podman[208214]: 2026-02-02 11:44:41.74895959 +0000 UTC m=+0.066265199 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent)
Feb  2 06:44:41 np0005604943 podman[208213]: 2026-02-02 11:44:41.768013633 +0000 UTC m=+0.088674355 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb  2 06:44:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:41 np0005604943 python3.9[208273]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 4548a36b-7cdc-5e3e-a814-4e1571be1fae#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:44:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:44:42 np0005604943 python3.9[208448]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:44 np0005604943 python3.9[208911]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:45 np0005604943 python3.9[209063]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:45 np0005604943 python3.9[209186]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032684.6435854-1191-141054357055665/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:46 np0005604943 python3.9[209338]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:46 np0005604943 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Feb  2 06:44:46 np0005604943 systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb  2 06:44:46 np0005604943 python3.9[209490]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:47 np0005604943 python3.9[209568]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:44:47 np0005604943 python3.9[209720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:47 np0005604943 python3.9[209798]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.bufinpc_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:48 np0005604943 python3.9[209950]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:48 np0005604943 python3.9[210028]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:49 np0005604943 python3.9[210180]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:44:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:50 np0005604943 python3[210333]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  2 06:44:50 np0005604943 python3.9[210485]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:51 np0005604943 python3.9[210563]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:51 np0005604943 python3.9[210715]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:44:52 np0005604943 python3.9[210840]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032691.5106227-1280-120201522183469/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:53 np0005604943 python3.9[210992]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:53 np0005604943 python3.9[211070]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:54 np0005604943 python3.9[211272]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:44:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:44:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:44:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:44:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:44:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:44:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:44:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:44:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:44:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:44:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:44:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:44:54 np0005604943 python3.9[211370]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:54 np0005604943 podman[211520]: 2026-02-02 11:44:54.804528387 +0000 UTC m=+0.036881392 container create 8f51a97b7b89603409c83cf8500742632e94f64e753a7b40a730f412bd16647d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 06:44:54 np0005604943 systemd[1]: Started libpod-conmon-8f51a97b7b89603409c83cf8500742632e94f64e753a7b40a730f412bd16647d.scope.
Feb  2 06:44:54 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:44:54 np0005604943 podman[211520]: 2026-02-02 11:44:54.788517688 +0000 UTC m=+0.020870703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:44:54 np0005604943 podman[211520]: 2026-02-02 11:44:54.887310439 +0000 UTC m=+0.119663484 container init 8f51a97b7b89603409c83cf8500742632e94f64e753a7b40a730f412bd16647d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_curie, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:44:54 np0005604943 podman[211520]: 2026-02-02 11:44:54.895808212 +0000 UTC m=+0.128161217 container start 8f51a97b7b89603409c83cf8500742632e94f64e753a7b40a730f412bd16647d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_curie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Feb  2 06:44:54 np0005604943 podman[211520]: 2026-02-02 11:44:54.899486212 +0000 UTC m=+0.131839247 container attach 8f51a97b7b89603409c83cf8500742632e94f64e753a7b40a730f412bd16647d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_curie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:44:54 np0005604943 systemd[1]: libpod-8f51a97b7b89603409c83cf8500742632e94f64e753a7b40a730f412bd16647d.scope: Deactivated successfully.
Feb  2 06:44:54 np0005604943 gifted_curie[211553]: 167 167
Feb  2 06:44:54 np0005604943 conmon[211553]: conmon 8f51a97b7b89603409c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8f51a97b7b89603409c83cf8500742632e94f64e753a7b40a730f412bd16647d.scope/container/memory.events
Feb  2 06:44:54 np0005604943 podman[211520]: 2026-02-02 11:44:54.902780933 +0000 UTC m=+0.135133978 container died 8f51a97b7b89603409c83cf8500742632e94f64e753a7b40a730f412bd16647d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_curie, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:44:54 np0005604943 systemd[1]: var-lib-containers-storage-overlay-202d9326fcf5108e76974ef60767980d2097d7d6a20c4b5a7ab3e839e852ca38-merged.mount: Deactivated successfully.
Feb  2 06:44:54 np0005604943 podman[211520]: 2026-02-02 11:44:54.941291989 +0000 UTC m=+0.173644984 container remove 8f51a97b7b89603409c83cf8500742632e94f64e753a7b40a730f412bd16647d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_curie, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Feb  2 06:44:54 np0005604943 systemd[1]: libpod-conmon-8f51a97b7b89603409c83cf8500742632e94f64e753a7b40a730f412bd16647d.scope: Deactivated successfully.
Feb  2 06:44:55 np0005604943 podman[211636]: 2026-02-02 11:44:55.10282682 +0000 UTC m=+0.065786175 container create 4ad42f837a6a801b9e2c105ca93cbc8e204f9957a49f80512fae1d266510acbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 06:44:55 np0005604943 systemd[1]: Started libpod-conmon-4ad42f837a6a801b9e2c105ca93cbc8e204f9957a49f80512fae1d266510acbc.scope.
Feb  2 06:44:55 np0005604943 podman[211636]: 2026-02-02 11:44:55.073372332 +0000 UTC m=+0.036331747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:44:55 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:44:55 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:44:55 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:44:55 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:44:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de3ecbf6a2b5a7bda9a1ad7389d8cde1e5d1a034393fcd485311889cebda2a52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:44:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de3ecbf6a2b5a7bda9a1ad7389d8cde1e5d1a034393fcd485311889cebda2a52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:44:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de3ecbf6a2b5a7bda9a1ad7389d8cde1e5d1a034393fcd485311889cebda2a52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:44:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de3ecbf6a2b5a7bda9a1ad7389d8cde1e5d1a034393fcd485311889cebda2a52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:44:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de3ecbf6a2b5a7bda9a1ad7389d8cde1e5d1a034393fcd485311889cebda2a52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:44:55 np0005604943 podman[211636]: 2026-02-02 11:44:55.238970899 +0000 UTC m=+0.201930254 container init 4ad42f837a6a801b9e2c105ca93cbc8e204f9957a49f80512fae1d266510acbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:44:55 np0005604943 python3.9[211630]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:44:55 np0005604943 podman[211636]: 2026-02-02 11:44:55.245840137 +0000 UTC m=+0.208799472 container start 4ad42f837a6a801b9e2c105ca93cbc8e204f9957a49f80512fae1d266510acbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:44:55 np0005604943 podman[211636]: 2026-02-02 11:44:55.248668345 +0000 UTC m=+0.211627680 container attach 4ad42f837a6a801b9e2c105ca93cbc8e204f9957a49f80512fae1d266510acbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 06:44:55 np0005604943 awesome_bhabha[211652]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:44:55 np0005604943 awesome_bhabha[211652]: --> All data devices are unavailable
Feb  2 06:44:55 np0005604943 systemd[1]: libpod-4ad42f837a6a801b9e2c105ca93cbc8e204f9957a49f80512fae1d266510acbc.scope: Deactivated successfully.
Feb  2 06:44:55 np0005604943 podman[211636]: 2026-02-02 11:44:55.677833015 +0000 UTC m=+0.640792410 container died 4ad42f837a6a801b9e2c105ca93cbc8e204f9957a49f80512fae1d266510acbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bhabha, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:44:55 np0005604943 systemd[1]: var-lib-containers-storage-overlay-de3ecbf6a2b5a7bda9a1ad7389d8cde1e5d1a034393fcd485311889cebda2a52-merged.mount: Deactivated successfully.
Feb  2 06:44:55 np0005604943 podman[211636]: 2026-02-02 11:44:55.730828739 +0000 UTC m=+0.693788084 container remove 4ad42f837a6a801b9e2c105ca93cbc8e204f9957a49f80512fae1d266510acbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:44:55 np0005604943 systemd[1]: libpod-conmon-4ad42f837a6a801b9e2c105ca93cbc8e204f9957a49f80512fae1d266510acbc.scope: Deactivated successfully.
Feb  2 06:44:55 np0005604943 python3.9[211791]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770032694.6573539-1319-201449601216775/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:56 np0005604943 podman[211975]: 2026-02-02 11:44:56.094944988 +0000 UTC m=+0.038243780 container create 6933b330492fefa484190daf1df57b42b9cfcbca6ce0d7f30cfc890ab10bccfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_faraday, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:44:56 np0005604943 systemd[1]: Started libpod-conmon-6933b330492fefa484190daf1df57b42b9cfcbca6ce0d7f30cfc890ab10bccfa.scope.
Feb  2 06:44:56 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:44:56 np0005604943 podman[211975]: 2026-02-02 11:44:56.076276846 +0000 UTC m=+0.019575678 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:44:56 np0005604943 podman[211975]: 2026-02-02 11:44:56.173725869 +0000 UTC m=+0.117024671 container init 6933b330492fefa484190daf1df57b42b9cfcbca6ce0d7f30cfc890ab10bccfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_faraday, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:44:56 np0005604943 podman[211975]: 2026-02-02 11:44:56.178988383 +0000 UTC m=+0.122287175 container start 6933b330492fefa484190daf1df57b42b9cfcbca6ce0d7f30cfc890ab10bccfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_faraday, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:44:56 np0005604943 pensive_faraday[212040]: 167 167
Feb  2 06:44:56 np0005604943 podman[211975]: 2026-02-02 11:44:56.183450356 +0000 UTC m=+0.126749148 container attach 6933b330492fefa484190daf1df57b42b9cfcbca6ce0d7f30cfc890ab10bccfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 06:44:56 np0005604943 systemd[1]: libpod-6933b330492fefa484190daf1df57b42b9cfcbca6ce0d7f30cfc890ab10bccfa.scope: Deactivated successfully.
Feb  2 06:44:56 np0005604943 podman[211975]: 2026-02-02 11:44:56.184195986 +0000 UTC m=+0.127494778 container died 6933b330492fefa484190daf1df57b42b9cfcbca6ce0d7f30cfc890ab10bccfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_faraday, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:44:56 np0005604943 systemd[1]: var-lib-containers-storage-overlay-7fa7c328137fc38b75ab78276e2ceeeb8a0d9243adbcf34ef4d84968f72fcf95-merged.mount: Deactivated successfully.
Feb  2 06:44:56 np0005604943 podman[211975]: 2026-02-02 11:44:56.223276099 +0000 UTC m=+0.166574891 container remove 6933b330492fefa484190daf1df57b42b9cfcbca6ce0d7f30cfc890ab10bccfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_faraday, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:44:56 np0005604943 systemd[1]: libpod-conmon-6933b330492fefa484190daf1df57b42b9cfcbca6ce0d7f30cfc890ab10bccfa.scope: Deactivated successfully.
Feb  2 06:44:56 np0005604943 podman[212065]: 2026-02-02 11:44:56.329938315 +0000 UTC m=+0.032472852 container create 86936a57a5dcbcf933b3902aaccfd9ca4fbf53eb269059144b3dbe5680901b0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_aryabhata, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:44:56 np0005604943 python3.9[212042]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:56 np0005604943 systemd[1]: Started libpod-conmon-86936a57a5dcbcf933b3902aaccfd9ca4fbf53eb269059144b3dbe5680901b0c.scope.
Feb  2 06:44:56 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:44:56 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b30132691f40809f01811aab2a407fab1c712822409d3ff6066ce81f84d822/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:44:56 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b30132691f40809f01811aab2a407fab1c712822409d3ff6066ce81f84d822/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:44:56 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b30132691f40809f01811aab2a407fab1c712822409d3ff6066ce81f84d822/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:44:56 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b30132691f40809f01811aab2a407fab1c712822409d3ff6066ce81f84d822/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:44:56 np0005604943 podman[212065]: 2026-02-02 11:44:56.315291042 +0000 UTC m=+0.017825579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:44:56 np0005604943 podman[212065]: 2026-02-02 11:44:56.412579642 +0000 UTC m=+0.115114179 container init 86936a57a5dcbcf933b3902aaccfd9ca4fbf53eb269059144b3dbe5680901b0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 06:44:56 np0005604943 podman[212065]: 2026-02-02 11:44:56.418514204 +0000 UTC m=+0.121048771 container start 86936a57a5dcbcf933b3902aaccfd9ca4fbf53eb269059144b3dbe5680901b0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 06:44:56 np0005604943 podman[212065]: 2026-02-02 11:44:56.422181095 +0000 UTC m=+0.124715632 container attach 86936a57a5dcbcf933b3902aaccfd9ca4fbf53eb269059144b3dbe5680901b0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]: {
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:    "0": [
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:        {
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "devices": [
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "/dev/loop3"
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            ],
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_name": "ceph_lv0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_size": "21470642176",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "name": "ceph_lv0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "tags": {
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.cluster_name": "ceph",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.crush_device_class": "",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.encrypted": "0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.objectstore": "bluestore",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.osd_id": "0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.type": "block",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.vdo": "0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.with_tpm": "0"
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            },
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "type": "block",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "vg_name": "ceph_vg0"
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:        }
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:    ],
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:    "1": [
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:        {
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "devices": [
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "/dev/loop4"
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            ],
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_name": "ceph_lv1",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_size": "21470642176",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "name": "ceph_lv1",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "tags": {
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.cluster_name": "ceph",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.crush_device_class": "",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.encrypted": "0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.objectstore": "bluestore",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.osd_id": "1",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.type": "block",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.vdo": "0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.with_tpm": "0"
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            },
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "type": "block",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "vg_name": "ceph_vg1"
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:        }
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:    ],
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:    "2": [
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:        {
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "devices": [
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "/dev/loop5"
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            ],
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_name": "ceph_lv2",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_size": "21470642176",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "name": "ceph_lv2",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "tags": {
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.cluster_name": "ceph",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.crush_device_class": "",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.encrypted": "0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.objectstore": "bluestore",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.osd_id": "2",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.type": "block",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.vdo": "0",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:                "ceph.with_tpm": "0"
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            },
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "type": "block",
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:            "vg_name": "ceph_vg2"
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:        }
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]:    ]
Feb  2 06:44:56 np0005604943 clever_aryabhata[212082]: }
Feb  2 06:44:56 np0005604943 systemd[1]: libpod-86936a57a5dcbcf933b3902aaccfd9ca4fbf53eb269059144b3dbe5680901b0c.scope: Deactivated successfully.
Feb  2 06:44:56 np0005604943 podman[212065]: 2026-02-02 11:44:56.743722235 +0000 UTC m=+0.446256782 container died 86936a57a5dcbcf933b3902aaccfd9ca4fbf53eb269059144b3dbe5680901b0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_aryabhata, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:44:56 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b5b30132691f40809f01811aab2a407fab1c712822409d3ff6066ce81f84d822-merged.mount: Deactivated successfully.
Feb  2 06:44:56 np0005604943 podman[212065]: 2026-02-02 11:44:56.77995477 +0000 UTC m=+0.482489307 container remove 86936a57a5dcbcf933b3902aaccfd9ca4fbf53eb269059144b3dbe5680901b0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_aryabhata, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 06:44:56 np0005604943 systemd[1]: libpod-conmon-86936a57a5dcbcf933b3902aaccfd9ca4fbf53eb269059144b3dbe5680901b0c.scope: Deactivated successfully.
Feb  2 06:44:56 np0005604943 python3.9[212253]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:44:57 np0005604943 podman[212347]: 2026-02-02 11:44:57.171515912 +0000 UTC m=+0.039539996 container create 2921ed6c5ec06d054c3bbcd920b69c86390a8959992b280c5874eacd1b82f824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_northcutt, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:44:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:44:57 np0005604943 systemd[1]: Started libpod-conmon-2921ed6c5ec06d054c3bbcd920b69c86390a8959992b280c5874eacd1b82f824.scope.
Feb  2 06:44:57 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:44:57 np0005604943 podman[212347]: 2026-02-02 11:44:57.154602918 +0000 UTC m=+0.022627022 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:44:57 np0005604943 podman[212347]: 2026-02-02 11:44:57.26406251 +0000 UTC m=+0.132086624 container init 2921ed6c5ec06d054c3bbcd920b69c86390a8959992b280c5874eacd1b82f824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_northcutt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:44:57 np0005604943 podman[212347]: 2026-02-02 11:44:57.270319092 +0000 UTC m=+0.138343176 container start 2921ed6c5ec06d054c3bbcd920b69c86390a8959992b280c5874eacd1b82f824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:44:57 np0005604943 podman[212347]: 2026-02-02 11:44:57.274409864 +0000 UTC m=+0.142433968 container attach 2921ed6c5ec06d054c3bbcd920b69c86390a8959992b280c5874eacd1b82f824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:44:57 np0005604943 intelligent_northcutt[212414]: 167 167
Feb  2 06:44:57 np0005604943 systemd[1]: libpod-2921ed6c5ec06d054c3bbcd920b69c86390a8959992b280c5874eacd1b82f824.scope: Deactivated successfully.
Feb  2 06:44:57 np0005604943 conmon[212414]: conmon 2921ed6c5ec06d054c3b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2921ed6c5ec06d054c3bbcd920b69c86390a8959992b280c5874eacd1b82f824.scope/container/memory.events
Feb  2 06:44:57 np0005604943 podman[212347]: 2026-02-02 11:44:57.277587742 +0000 UTC m=+0.145611846 container died 2921ed6c5ec06d054c3bbcd920b69c86390a8959992b280c5874eacd1b82f824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_northcutt, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:44:57 np0005604943 systemd[1]: var-lib-containers-storage-overlay-5a1e9889d2949ac70cdc8e03b6034b23dee61a6d9b0c76851cc4a8be287774f0-merged.mount: Deactivated successfully.
Feb  2 06:44:57 np0005604943 podman[212347]: 2026-02-02 11:44:57.314569676 +0000 UTC m=+0.182593750 container remove 2921ed6c5ec06d054c3bbcd920b69c86390a8959992b280c5874eacd1b82f824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_northcutt, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:44:57 np0005604943 systemd[1]: libpod-conmon-2921ed6c5ec06d054c3bbcd920b69c86390a8959992b280c5874eacd1b82f824.scope: Deactivated successfully.
Feb  2 06:44:57 np0005604943 podman[212445]: 2026-02-02 11:44:57.444307625 +0000 UTC m=+0.038498307 container create 5db76a8ae34c5f6de64264957bffa9fc4587b4ce4277e598017613bb991eb54a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_snyder, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:44:57 np0005604943 systemd[1]: Started libpod-conmon-5db76a8ae34c5f6de64264957bffa9fc4587b4ce4277e598017613bb991eb54a.scope.
Feb  2 06:44:57 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:44:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6732d2dc2b9e54efc89694b19c3d21950943a6bf1dd5ecda869bb6c33b0d98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:44:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6732d2dc2b9e54efc89694b19c3d21950943a6bf1dd5ecda869bb6c33b0d98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:44:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6732d2dc2b9e54efc89694b19c3d21950943a6bf1dd5ecda869bb6c33b0d98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:44:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6732d2dc2b9e54efc89694b19c3d21950943a6bf1dd5ecda869bb6c33b0d98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:44:57 np0005604943 podman[212445]: 2026-02-02 11:44:57.518680466 +0000 UTC m=+0.112871198 container init 5db76a8ae34c5f6de64264957bffa9fc4587b4ce4277e598017613bb991eb54a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_snyder, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 06:44:57 np0005604943 podman[212445]: 2026-02-02 11:44:57.426556018 +0000 UTC m=+0.020746720 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:44:57 np0005604943 podman[212445]: 2026-02-02 11:44:57.526405677 +0000 UTC m=+0.120596359 container start 5db76a8ae34c5f6de64264957bffa9fc4587b4ce4277e598017613bb991eb54a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb  2 06:44:57 np0005604943 podman[212445]: 2026-02-02 11:44:57.529685878 +0000 UTC m=+0.123876580 container attach 5db76a8ae34c5f6de64264957bffa9fc4587b4ce4277e598017613bb991eb54a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:44:57 np0005604943 python3.9[212536]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:44:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:58 np0005604943 lvm[212760]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:44:58 np0005604943 lvm[212761]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:44:58 np0005604943 lvm[212760]: VG ceph_vg0 finished
Feb  2 06:44:58 np0005604943 lvm[212761]: VG ceph_vg1 finished
Feb  2 06:44:58 np0005604943 lvm[212765]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:44:58 np0005604943 lvm[212765]: VG ceph_vg2 finished
Feb  2 06:44:58 np0005604943 nervous_snyder[212500]: {}
Feb  2 06:44:58 np0005604943 python3.9[212763]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:44:58 np0005604943 systemd[1]: libpod-5db76a8ae34c5f6de64264957bffa9fc4587b4ce4277e598017613bb991eb54a.scope: Deactivated successfully.
Feb  2 06:44:58 np0005604943 podman[212445]: 2026-02-02 11:44:58.331510754 +0000 UTC m=+0.925701436 container died 5db76a8ae34c5f6de64264957bffa9fc4587b4ce4277e598017613bb991eb54a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:44:58 np0005604943 systemd[1]: var-lib-containers-storage-overlay-fd6732d2dc2b9e54efc89694b19c3d21950943a6bf1dd5ecda869bb6c33b0d98-merged.mount: Deactivated successfully.
Feb  2 06:44:58 np0005604943 podman[212445]: 2026-02-02 11:44:58.373583188 +0000 UTC m=+0.967773890 container remove 5db76a8ae34c5f6de64264957bffa9fc4587b4ce4277e598017613bb991eb54a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_snyder, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 06:44:58 np0005604943 systemd[1]: libpod-conmon-5db76a8ae34c5f6de64264957bffa9fc4587b4ce4277e598017613bb991eb54a.scope: Deactivated successfully.
Feb  2 06:44:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:44:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:44:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:44:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:44:58 np0005604943 python3.9[212957]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:44:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:44:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:44:59 np0005604943 python3.9[213111]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:44:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:44:59 np0005604943 python3.9[213266]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:00 np0005604943 python3.9[213418]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:45:00 np0005604943 python3.9[213541]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032700.0967104-1391-87268405950334/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:01 np0005604943 python3.9[213693]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:45:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:01 np0005604943 python3.9[213816]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032701.1062343-1406-197020333886306/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:45:02 np0005604943 python3.9[213968]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:45:02 np0005604943 python3.9[214091]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032702.0967863-1421-83238635513649/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:03 np0005604943 python3.9[214243]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:45:03 np0005604943 systemd[1]: Reloading.
Feb  2 06:45:03 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:45:03 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:45:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:03 np0005604943 systemd[1]: Reached target edpm_libvirt.target.
Feb  2 06:45:04 np0005604943 python3.9[214435]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb  2 06:45:04 np0005604943 systemd[1]: Reloading.
Feb  2 06:45:04 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:45:04 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:45:04 np0005604943 systemd[1]: Reloading.
Feb  2 06:45:05 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:45:05 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:45:05 np0005604943 systemd[1]: session-48.scope: Deactivated successfully.
Feb  2 06:45:05 np0005604943 systemd[1]: session-48.scope: Consumed 2min 50.639s CPU time.
Feb  2 06:45:05 np0005604943 systemd-logind[786]: Session 48 logged out. Waiting for processes to exit.
Feb  2 06:45:05 np0005604943 systemd-logind[786]: Removed session 48.
Feb  2 06:45:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.492830) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032706492875, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2045, "num_deletes": 251, "total_data_size": 3578679, "memory_usage": 3641728, "flush_reason": "Manual Compaction"}
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032706508248, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3491439, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9708, "largest_seqno": 11752, "table_properties": {"data_size": 3482131, "index_size": 5930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17897, "raw_average_key_size": 19, "raw_value_size": 3463709, "raw_average_value_size": 3768, "num_data_blocks": 269, "num_entries": 919, "num_filter_entries": 919, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770032474, "oldest_key_time": 1770032474, "file_creation_time": 1770032706, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 15474 microseconds, and 7895 cpu microseconds.
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.508304) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3491439 bytes OK
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.508326) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.509770) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.509795) EVENT_LOG_v1 {"time_micros": 1770032706509788, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.509834) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3570137, prev total WAL file size 3570137, number of live WAL files 2.
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.510783) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3409KB)], [26(6027KB)]
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032706510866, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9663186, "oldest_snapshot_seqno": -1}
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3703 keys, 8163335 bytes, temperature: kUnknown
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032706550576, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8163335, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8134647, "index_size": 18338, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88912, "raw_average_key_size": 24, "raw_value_size": 8063869, "raw_average_value_size": 2177, "num_data_blocks": 795, "num_entries": 3703, "num_filter_entries": 3703, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770032706, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.550862) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8163335 bytes
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.552368) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.5 rd, 204.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4217, records dropped: 514 output_compression: NoCompression
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.552394) EVENT_LOG_v1 {"time_micros": 1770032706552381, "job": 10, "event": "compaction_finished", "compaction_time_micros": 39856, "compaction_time_cpu_micros": 24232, "output_level": 6, "num_output_files": 1, "total_output_size": 8163335, "num_input_records": 4217, "num_output_records": 3703, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032706553313, "job": 10, "event": "table_file_deletion", "file_number": 28}
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032706553921, "job": 10, "event": "table_file_deletion", "file_number": 26}
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.510629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.554070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.554075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.554078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.554080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:45:06 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:45:06.554082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:45:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:45:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:45:09
Feb  2 06:45:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:45:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:45:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'vms']
Feb  2 06:45:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:45:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:45:10.007 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:45:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:45:10.008 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:45:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:45:10.008 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:45:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:45:11 np0005604943 systemd-logind[786]: New session 49 of user zuul.
Feb  2 06:45:11 np0005604943 systemd[1]: Started Session 49 of User zuul.
Feb  2 06:45:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:11 np0005604943 python3.9[214686]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:45:12 np0005604943 podman[214692]: 2026-02-02 11:45:12.023844481 +0000 UTC m=+0.042725643 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:45:12 np0005604943 podman[214691]: 2026-02-02 11:45:12.044836477 +0000 UTC m=+0.065195610 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Feb  2 06:45:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:45:13 np0005604943 python3.9[214885]: ansible-ansible.builtin.service_facts Invoked
Feb  2 06:45:13 np0005604943 network[214902]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 06:45:13 np0005604943 network[214903]: 'network-scripts' will be removed from distribution in near future.
Feb  2 06:45:13 np0005604943 network[214904]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 06:45:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:16 np0005604943 python3.9[215176]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 06:45:17 np0005604943 python3.9[215260]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:45:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:45:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:45:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:45:23 np0005604943 python3.9[215413]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:45:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:24 np0005604943 python3.9[215565]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:45:24 np0005604943 python3.9[215718]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:45:25 np0005604943 python3.9[215870]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:45:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:25 np0005604943 python3.9[216023]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:45:26 np0005604943 python3.9[216146]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032725.5339944-90-87309317811151/.source.iscsi _original_basename=.m2xe9b5h follow=False checksum=3a8004bf38b9e37341b91817e45d2efe3f49ffcd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:45:27 np0005604943 python3.9[216298]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:28 np0005604943 python3.9[216450]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:29 np0005604943 python3.9[216602]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:45:29 np0005604943 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb  2 06:45:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:29 np0005604943 python3.9[216758]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:45:30 np0005604943 systemd[1]: Reloading.
Feb  2 06:45:30 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:45:30 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:45:30 np0005604943 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb  2 06:45:30 np0005604943 systemd[1]: Starting Open-iSCSI...
Feb  2 06:45:30 np0005604943 kernel: Loading iSCSI transport class v2.0-870.
Feb  2 06:45:30 np0005604943 systemd[1]: Started Open-iSCSI.
Feb  2 06:45:30 np0005604943 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Feb  2 06:45:30 np0005604943 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Feb  2 06:45:31 np0005604943 python3.9[216958]: ansible-ansible.builtin.service_facts Invoked
Feb  2 06:45:31 np0005604943 network[216975]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 06:45:31 np0005604943 network[216976]: 'network-scripts' will be removed from distribution in near future.
Feb  2 06:45:31 np0005604943 network[216977]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 06:45:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:45:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:34 np0005604943 python3.9[217249]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:45:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:36 np0005604943 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 06:45:36 np0005604943 systemd[1]: Starting man-db-cache-update.service...
Feb  2 06:45:36 np0005604943 systemd[1]: Reloading.
Feb  2 06:45:36 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:45:36 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:45:36 np0005604943 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 06:45:36 np0005604943 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 06:45:36 np0005604943 systemd[1]: Finished man-db-cache-update.service.
Feb  2 06:45:36 np0005604943 systemd[1]: run-r6b2658e952a24deb960165906980c7a2.service: Deactivated successfully.
Feb  2 06:45:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:45:37 np0005604943 python3.9[217565]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb  2 06:45:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:38 np0005604943 python3.9[217717]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb  2 06:45:39 np0005604943 python3.9[217873]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:45:39 np0005604943 python3.9[217996]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032738.6019137-178-47311430655313/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:40 np0005604943 python3.9[218148]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:45:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:45:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:45:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:45:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:45:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:45:41 np0005604943 python3.9[218300]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:45:41 np0005604943 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  2 06:45:41 np0005604943 systemd[1]: Stopped Load Kernel Modules.
Feb  2 06:45:41 np0005604943 systemd[1]: Stopping Load Kernel Modules...
Feb  2 06:45:41 np0005604943 systemd[1]: Starting Load Kernel Modules...
Feb  2 06:45:41 np0005604943 systemd[1]: Finished Load Kernel Modules.
Feb  2 06:45:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:42 np0005604943 python3.9[218456]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:45:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:45:42 np0005604943 podman[218582]: 2026-02-02 11:45:42.614432442 +0000 UTC m=+0.054280317 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Feb  2 06:45:42 np0005604943 podman[218581]: 2026-02-02 11:45:42.634932782 +0000 UTC m=+0.082104037 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb  2 06:45:42 np0005604943 python3.9[218647]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:45:43 np0005604943 python3.9[218805]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:45:43 np0005604943 python3.9[218928]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032742.993368-229-2351157069439/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:44 np0005604943 python3.9[219080]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:45:44 np0005604943 python3.9[219233]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:45 np0005604943 python3.9[219385]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:46 np0005604943 python3.9[219537]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:46 np0005604943 python3.9[219689]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:45:47 np0005604943 python3.9[219841]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:48 np0005604943 python3.9[219993]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:48 np0005604943 python3.9[220145]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:49 np0005604943 python3.9[220297]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:45:49 np0005604943 python3.9[220451]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:45:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:50 np0005604943 python3.9[220604]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:45:50 np0005604943 systemd[1]: Listening on multipathd control socket.
Feb  2 06:45:51 np0005604943 python3.9[220760]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:45:51 np0005604943 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb  2 06:45:51 np0005604943 udevadm[220765]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb  2 06:45:51 np0005604943 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb  2 06:45:51 np0005604943 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb  2 06:45:51 np0005604943 multipathd[220769]: --------start up--------
Feb  2 06:45:51 np0005604943 multipathd[220769]: read /etc/multipath.conf
Feb  2 06:45:51 np0005604943 multipathd[220769]: path checkers start up
Feb  2 06:45:51 np0005604943 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb  2 06:45:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:52 np0005604943 python3.9[220928]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb  2 06:45:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:45:52 np0005604943 python3.9[221080]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb  2 06:45:52 np0005604943 kernel: Key type psk registered
Feb  2 06:45:53 np0005604943 python3.9[221243]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:45:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:54 np0005604943 python3.9[221366]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770032753.1478336-359-242442457470395/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:54 np0005604943 python3.9[221518]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:45:55 np0005604943 python3.9[221670]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:45:55 np0005604943 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  2 06:45:55 np0005604943 systemd[1]: Stopped Load Kernel Modules.
Feb  2 06:45:55 np0005604943 systemd[1]: Stopping Load Kernel Modules...
Feb  2 06:45:55 np0005604943 systemd[1]: Starting Load Kernel Modules...
Feb  2 06:45:55 np0005604943 systemd[1]: Finished Load Kernel Modules.
Feb  2 06:45:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:56 np0005604943 python3.9[221826]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 06:45:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:45:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:58 np0005604943 systemd[1]: Reloading.
Feb  2 06:45:58 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:45:58 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:45:58 np0005604943 systemd[1]: Reloading.
Feb  2 06:45:58 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:45:58 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:45:58 np0005604943 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Feb  2 06:45:58 np0005604943 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb  2 06:45:58 np0005604943 lvm[221990]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:45:58 np0005604943 lvm[221990]: VG ceph_vg1 finished
Feb  2 06:45:58 np0005604943 lvm[221993]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:45:58 np0005604943 lvm[221993]: VG ceph_vg0 finished
Feb  2 06:45:58 np0005604943 lvm[221991]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:45:58 np0005604943 lvm[221991]: VG ceph_vg2 finished
Feb  2 06:45:58 np0005604943 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 06:45:59 np0005604943 systemd[1]: Starting man-db-cache-update.service...
Feb  2 06:45:59 np0005604943 systemd[1]: Reloading.
Feb  2 06:45:59 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:45:59 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:45:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:45:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:45:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:45:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:45:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:45:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:45:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:45:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:45:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:45:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:45:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:45:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:45:59 np0005604943 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 06:45:59 np0005604943 podman[222888]: 2026-02-02 11:45:59.644778675 +0000 UTC m=+0.034687581 container create 1fb14d58cb1a2e3118ecadac5dd7e5f9479b8112c67cfa390331d69b556d2567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_cray, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 06:45:59 np0005604943 systemd[1]: Started libpod-conmon-1fb14d58cb1a2e3118ecadac5dd7e5f9479b8112c67cfa390331d69b556d2567.scope.
Feb  2 06:45:59 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:45:59 np0005604943 podman[222888]: 2026-02-02 11:45:59.72757968 +0000 UTC m=+0.117488606 container init 1fb14d58cb1a2e3118ecadac5dd7e5f9479b8112c67cfa390331d69b556d2567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 06:45:59 np0005604943 podman[222888]: 2026-02-02 11:45:59.629721392 +0000 UTC m=+0.019630318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:45:59 np0005604943 podman[222888]: 2026-02-02 11:45:59.735850127 +0000 UTC m=+0.125759043 container start 1fb14d58cb1a2e3118ecadac5dd7e5f9479b8112c67cfa390331d69b556d2567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 06:45:59 np0005604943 podman[222888]: 2026-02-02 11:45:59.740380341 +0000 UTC m=+0.130289267 container attach 1fb14d58cb1a2e3118ecadac5dd7e5f9479b8112c67cfa390331d69b556d2567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_cray, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:45:59 np0005604943 condescending_cray[223034]: 167 167
Feb  2 06:45:59 np0005604943 systemd[1]: libpod-1fb14d58cb1a2e3118ecadac5dd7e5f9479b8112c67cfa390331d69b556d2567.scope: Deactivated successfully.
Feb  2 06:45:59 np0005604943 podman[222888]: 2026-02-02 11:45:59.743693002 +0000 UTC m=+0.133601908 container died 1fb14d58cb1a2e3118ecadac5dd7e5f9479b8112c67cfa390331d69b556d2567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_cray, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:45:59 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1f17548d887069c90242d1fdd523bef1258aa221f13c2e77254c296234dcf094-merged.mount: Deactivated successfully.
Feb  2 06:45:59 np0005604943 podman[222888]: 2026-02-02 11:45:59.781378763 +0000 UTC m=+0.171287669 container remove 1fb14d58cb1a2e3118ecadac5dd7e5f9479b8112c67cfa390331d69b556d2567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:45:59 np0005604943 systemd[1]: libpod-conmon-1fb14d58cb1a2e3118ecadac5dd7e5f9479b8112c67cfa390331d69b556d2567.scope: Deactivated successfully.
Feb  2 06:45:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:45:59 np0005604943 podman[223306]: 2026-02-02 11:45:59.903458124 +0000 UTC m=+0.036017427 container create 93492d55f363253f9fb1c0ecb1912c8ac5d92103e8a40beeb1f180755da19185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 06:45:59 np0005604943 systemd[1]: Started libpod-conmon-93492d55f363253f9fb1c0ecb1912c8ac5d92103e8a40beeb1f180755da19185.scope.
Feb  2 06:45:59 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:45:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b0e40339bd3c2fa7ef2e9dc7e738394c17dcfe5fd2b0a4c9c8843813ddbea2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:45:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b0e40339bd3c2fa7ef2e9dc7e738394c17dcfe5fd2b0a4c9c8843813ddbea2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:45:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b0e40339bd3c2fa7ef2e9dc7e738394c17dcfe5fd2b0a4c9c8843813ddbea2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:45:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b0e40339bd3c2fa7ef2e9dc7e738394c17dcfe5fd2b0a4c9c8843813ddbea2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:45:59 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03b0e40339bd3c2fa7ef2e9dc7e738394c17dcfe5fd2b0a4c9c8843813ddbea2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:45:59 np0005604943 podman[223306]: 2026-02-02 11:45:59.968112303 +0000 UTC m=+0.100671636 container init 93492d55f363253f9fb1c0ecb1912c8ac5d92103e8a40beeb1f180755da19185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 06:45:59 np0005604943 podman[223306]: 2026-02-02 11:45:59.974503958 +0000 UTC m=+0.107063261 container start 93492d55f363253f9fb1c0ecb1912c8ac5d92103e8a40beeb1f180755da19185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:45:59 np0005604943 podman[223306]: 2026-02-02 11:45:59.980560064 +0000 UTC m=+0.113119367 container attach 93492d55f363253f9fb1c0ecb1912c8ac5d92103e8a40beeb1f180755da19185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cohen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:45:59 np0005604943 podman[223306]: 2026-02-02 11:45:59.886535691 +0000 UTC m=+0.019095014 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:46:00 np0005604943 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 06:46:00 np0005604943 systemd[1]: Finished man-db-cache-update.service.
Feb  2 06:46:00 np0005604943 systemd[1]: man-db-cache-update.service: Consumed 1.107s CPU time.
Feb  2 06:46:00 np0005604943 systemd[1]: run-r6ab1825d299e4dc7b4c557a595e04fb6.service: Deactivated successfully.
Feb  2 06:46:00 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:46:00 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:46:00 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:46:00 np0005604943 festive_cohen[223410]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:46:00 np0005604943 festive_cohen[223410]: --> All data devices are unavailable
Feb  2 06:46:00 np0005604943 systemd[1]: libpod-93492d55f363253f9fb1c0ecb1912c8ac5d92103e8a40beeb1f180755da19185.scope: Deactivated successfully.
Feb  2 06:46:00 np0005604943 podman[223306]: 2026-02-02 11:46:00.399731554 +0000 UTC m=+0.532290857 container died 93492d55f363253f9fb1c0ecb1912c8ac5d92103e8a40beeb1f180755da19185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cohen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:46:00 np0005604943 systemd[1]: var-lib-containers-storage-overlay-03b0e40339bd3c2fa7ef2e9dc7e738394c17dcfe5fd2b0a4c9c8843813ddbea2-merged.mount: Deactivated successfully.
Feb  2 06:46:00 np0005604943 python3.9[223503]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:46:00 np0005604943 podman[223306]: 2026-02-02 11:46:00.443008218 +0000 UTC m=+0.575567521 container remove 93492d55f363253f9fb1c0ecb1912c8ac5d92103e8a40beeb1f180755da19185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cohen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Feb  2 06:46:00 np0005604943 systemd[1]: libpod-conmon-93492d55f363253f9fb1c0ecb1912c8ac5d92103e8a40beeb1f180755da19185.scope: Deactivated successfully.
Feb  2 06:46:00 np0005604943 systemd[1]: Stopping Open-iSCSI...
Feb  2 06:46:00 np0005604943 iscsid[216799]: iscsid shutting down.
Feb  2 06:46:00 np0005604943 systemd[1]: iscsid.service: Deactivated successfully.
Feb  2 06:46:00 np0005604943 systemd[1]: Stopped Open-iSCSI.
Feb  2 06:46:00 np0005604943 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb  2 06:46:00 np0005604943 systemd[1]: Starting Open-iSCSI...
Feb  2 06:46:00 np0005604943 systemd[1]: Started Open-iSCSI.
Feb  2 06:46:00 np0005604943 podman[223719]: 2026-02-02 11:46:00.843359914 +0000 UTC m=+0.033931659 container create bd164c346b179e0d3b2e183cd3b20ec67beb9f7e0f0d0f7df23470b323d2c43e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 06:46:00 np0005604943 systemd[1]: Started libpod-conmon-bd164c346b179e0d3b2e183cd3b20ec67beb9f7e0f0d0f7df23470b323d2c43e.scope.
Feb  2 06:46:00 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:46:00 np0005604943 podman[223719]: 2026-02-02 11:46:00.906678776 +0000 UTC m=+0.097250531 container init bd164c346b179e0d3b2e183cd3b20ec67beb9f7e0f0d0f7df23470b323d2c43e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 06:46:00 np0005604943 podman[223719]: 2026-02-02 11:46:00.912507616 +0000 UTC m=+0.103079371 container start bd164c346b179e0d3b2e183cd3b20ec67beb9f7e0f0d0f7df23470b323d2c43e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:46:00 np0005604943 podman[223719]: 2026-02-02 11:46:00.91593783 +0000 UTC m=+0.106509595 container attach bd164c346b179e0d3b2e183cd3b20ec67beb9f7e0f0d0f7df23470b323d2c43e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 06:46:00 np0005604943 blissful_blackwell[223764]: 167 167
Feb  2 06:46:00 np0005604943 systemd[1]: libpod-bd164c346b179e0d3b2e183cd3b20ec67beb9f7e0f0d0f7df23470b323d2c43e.scope: Deactivated successfully.
Feb  2 06:46:00 np0005604943 conmon[223764]: conmon bd164c346b179e0d3b2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd164c346b179e0d3b2e183cd3b20ec67beb9f7e0f0d0f7df23470b323d2c43e.scope/container/memory.events
Feb  2 06:46:00 np0005604943 podman[223719]: 2026-02-02 11:46:00.917572435 +0000 UTC m=+0.108144200 container died bd164c346b179e0d3b2e183cd3b20ec67beb9f7e0f0d0f7df23470b323d2c43e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:46:00 np0005604943 podman[223719]: 2026-02-02 11:46:00.830101072 +0000 UTC m=+0.020672837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:46:00 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a4d27add44f651867607ac9922586a8796a699b630a9fac324413b8007faac38-merged.mount: Deactivated successfully.
Feb  2 06:46:00 np0005604943 podman[223719]: 2026-02-02 11:46:00.950315521 +0000 UTC m=+0.140887276 container remove bd164c346b179e0d3b2e183cd3b20ec67beb9f7e0f0d0f7df23470b323d2c43e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 06:46:00 np0005604943 systemd[1]: libpod-conmon-bd164c346b179e0d3b2e183cd3b20ec67beb9f7e0f0d0f7df23470b323d2c43e.scope: Deactivated successfully.
Feb  2 06:46:01 np0005604943 podman[223788]: 2026-02-02 11:46:01.057813693 +0000 UTC m=+0.030931898 container create 9994b66fd94b756c36c3804321e7e0b85ec192a155c812c3d81e3d936a6e2d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:46:01 np0005604943 systemd[1]: Started libpod-conmon-9994b66fd94b756c36c3804321e7e0b85ec192a155c812c3d81e3d936a6e2d5b.scope.
Feb  2 06:46:01 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:46:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8971eabe304f6745b93f625fcaae12f2777db1c863e46399d5e6fa39ccb5ae1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:46:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8971eabe304f6745b93f625fcaae12f2777db1c863e46399d5e6fa39ccb5ae1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:46:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8971eabe304f6745b93f625fcaae12f2777db1c863e46399d5e6fa39ccb5ae1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:46:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8971eabe304f6745b93f625fcaae12f2777db1c863e46399d5e6fa39ccb5ae1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:46:01 np0005604943 podman[223788]: 2026-02-02 11:46:01.122102612 +0000 UTC m=+0.095220837 container init 9994b66fd94b756c36c3804321e7e0b85ec192a155c812c3d81e3d936a6e2d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:46:01 np0005604943 podman[223788]: 2026-02-02 11:46:01.126312127 +0000 UTC m=+0.099430362 container start 9994b66fd94b756c36c3804321e7e0b85ec192a155c812c3d81e3d936a6e2d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:46:01 np0005604943 podman[223788]: 2026-02-02 11:46:01.130312677 +0000 UTC m=+0.103430902 container attach 9994b66fd94b756c36c3804321e7e0b85ec192a155c812c3d81e3d936a6e2d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:46:01 np0005604943 podman[223788]: 2026-02-02 11:46:01.044367615 +0000 UTC m=+0.017485840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:46:01 np0005604943 python3.9[223763]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:46:01 np0005604943 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb  2 06:46:01 np0005604943 multipathd[220769]: exit (signal)
Feb  2 06:46:01 np0005604943 multipathd[220769]: --------shut down-------
Feb  2 06:46:01 np0005604943 systemd[1]: multipathd.service: Deactivated successfully.
Feb  2 06:46:01 np0005604943 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb  2 06:46:01 np0005604943 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb  2 06:46:01 np0005604943 multipathd[223815]: --------start up--------
Feb  2 06:46:01 np0005604943 multipathd[223815]: read /etc/multipath.conf
Feb  2 06:46:01 np0005604943 multipathd[223815]: path checkers start up
Feb  2 06:46:01 np0005604943 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]: {
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:    "0": [
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:        {
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "devices": [
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "/dev/loop3"
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            ],
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_name": "ceph_lv0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_size": "21470642176",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "name": "ceph_lv0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "tags": {
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.cluster_name": "ceph",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.crush_device_class": "",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.encrypted": "0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.objectstore": "bluestore",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.osd_id": "0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.type": "block",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.vdo": "0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.with_tpm": "0"
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            },
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "type": "block",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "vg_name": "ceph_vg0"
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:        }
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:    ],
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:    "1": [
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:        {
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "devices": [
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "/dev/loop4"
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            ],
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_name": "ceph_lv1",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_size": "21470642176",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "name": "ceph_lv1",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "tags": {
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.cluster_name": "ceph",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.crush_device_class": "",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.encrypted": "0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.objectstore": "bluestore",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.osd_id": "1",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.type": "block",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.vdo": "0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.with_tpm": "0"
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            },
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "type": "block",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "vg_name": "ceph_vg1"
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:        }
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:    ],
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:    "2": [
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:        {
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "devices": [
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "/dev/loop5"
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            ],
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_name": "ceph_lv2",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_size": "21470642176",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "name": "ceph_lv2",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "tags": {
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.cluster_name": "ceph",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.crush_device_class": "",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.encrypted": "0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.objectstore": "bluestore",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.osd_id": "2",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.type": "block",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.vdo": "0",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:                "ceph.with_tpm": "0"
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            },
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "type": "block",
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:            "vg_name": "ceph_vg2"
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:        }
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]:    ]
Feb  2 06:46:01 np0005604943 lucid_dirac[223805]: }
Feb  2 06:46:01 np0005604943 systemd[1]: libpod-9994b66fd94b756c36c3804321e7e0b85ec192a155c812c3d81e3d936a6e2d5b.scope: Deactivated successfully.
Feb  2 06:46:01 np0005604943 podman[223788]: 2026-02-02 11:46:01.385751727 +0000 UTC m=+0.358869932 container died 9994b66fd94b756c36c3804321e7e0b85ec192a155c812c3d81e3d936a6e2d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 06:46:01 np0005604943 systemd[1]: var-lib-containers-storage-overlay-8971eabe304f6745b93f625fcaae12f2777db1c863e46399d5e6fa39ccb5ae1b-merged.mount: Deactivated successfully.
Feb  2 06:46:01 np0005604943 podman[223788]: 2026-02-02 11:46:01.431646223 +0000 UTC m=+0.404764438 container remove 9994b66fd94b756c36c3804321e7e0b85ec192a155c812c3d81e3d936a6e2d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 06:46:01 np0005604943 systemd[1]: libpod-conmon-9994b66fd94b756c36c3804321e7e0b85ec192a155c812c3d81e3d936a6e2d5b.scope: Deactivated successfully.
Feb  2 06:46:01 np0005604943 podman[224049]: 2026-02-02 11:46:01.842596409 +0000 UTC m=+0.040841979 container create 53f9423e0a40ba4d0588c87d1cbddf9fc12296633edd70b5d0d4acfd028a84b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Feb  2 06:46:01 np0005604943 systemd[1]: Started libpod-conmon-53f9423e0a40ba4d0588c87d1cbddf9fc12296633edd70b5d0d4acfd028a84b0.scope.
Feb  2 06:46:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:01 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:46:01 np0005604943 podman[224049]: 2026-02-02 11:46:01.823020993 +0000 UTC m=+0.021266623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:46:01 np0005604943 podman[224049]: 2026-02-02 11:46:01.924886491 +0000 UTC m=+0.123132101 container init 53f9423e0a40ba4d0588c87d1cbddf9fc12296633edd70b5d0d4acfd028a84b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_varahamihira, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:46:01 np0005604943 podman[224049]: 2026-02-02 11:46:01.92960351 +0000 UTC m=+0.127849110 container start 53f9423e0a40ba4d0588c87d1cbddf9fc12296633edd70b5d0d4acfd028a84b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 06:46:01 np0005604943 podman[224049]: 2026-02-02 11:46:01.933534158 +0000 UTC m=+0.131779778 container attach 53f9423e0a40ba4d0588c87d1cbddf9fc12296633edd70b5d0d4acfd028a84b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_varahamihira, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:46:01 np0005604943 hungry_varahamihira[224066]: 167 167
Feb  2 06:46:01 np0005604943 systemd[1]: libpod-53f9423e0a40ba4d0588c87d1cbddf9fc12296633edd70b5d0d4acfd028a84b0.scope: Deactivated successfully.
Feb  2 06:46:01 np0005604943 podman[224049]: 2026-02-02 11:46:01.93436542 +0000 UTC m=+0.132611010 container died 53f9423e0a40ba4d0588c87d1cbddf9fc12296633edd70b5d0d4acfd028a84b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_varahamihira, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 06:46:01 np0005604943 python3.9[224037]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 06:46:01 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b1ccf4bfe8b559e30dbc7de64a099bc8b2bb7357ec053c6bd951bba37101b835-merged.mount: Deactivated successfully.
Feb  2 06:46:01 np0005604943 podman[224049]: 2026-02-02 11:46:01.983593857 +0000 UTC m=+0.181839437 container remove 53f9423e0a40ba4d0588c87d1cbddf9fc12296633edd70b5d0d4acfd028a84b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 06:46:02 np0005604943 systemd[1]: libpod-conmon-53f9423e0a40ba4d0588c87d1cbddf9fc12296633edd70b5d0d4acfd028a84b0.scope: Deactivated successfully.
Feb  2 06:46:02 np0005604943 podman[224094]: 2026-02-02 11:46:02.113315557 +0000 UTC m=+0.047200323 container create b79a9f11905a770141a98a61dfb7123d84d09e884a662c943fef3653496ba5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 06:46:02 np0005604943 systemd[1]: Started libpod-conmon-b79a9f11905a770141a98a61dfb7123d84d09e884a662c943fef3653496ba5d2.scope.
Feb  2 06:46:02 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:46:02 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93feb76fbed76bf4e3f54ed1a9914bc5206fe3b79bf452e93083c2153c5eec52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:46:02 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93feb76fbed76bf4e3f54ed1a9914bc5206fe3b79bf452e93083c2153c5eec52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:46:02 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93feb76fbed76bf4e3f54ed1a9914bc5206fe3b79bf452e93083c2153c5eec52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:46:02 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93feb76fbed76bf4e3f54ed1a9914bc5206fe3b79bf452e93083c2153c5eec52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:46:02 np0005604943 podman[224094]: 2026-02-02 11:46:02.087371678 +0000 UTC m=+0.021256524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:46:02 np0005604943 podman[224094]: 2026-02-02 11:46:02.185451232 +0000 UTC m=+0.119336028 container init b79a9f11905a770141a98a61dfb7123d84d09e884a662c943fef3653496ba5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:46:02 np0005604943 podman[224094]: 2026-02-02 11:46:02.191355923 +0000 UTC m=+0.125240689 container start b79a9f11905a770141a98a61dfb7123d84d09e884a662c943fef3653496ba5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 06:46:02 np0005604943 podman[224094]: 2026-02-02 11:46:02.194664503 +0000 UTC m=+0.128549269 container attach b79a9f11905a770141a98a61dfb7123d84d09e884a662c943fef3653496ba5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bhabha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:46:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:46:02 np0005604943 lvm[224339]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:46:02 np0005604943 lvm[224339]: VG ceph_vg0 finished
Feb  2 06:46:02 np0005604943 lvm[224342]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:46:02 np0005604943 lvm[224342]: VG ceph_vg1 finished
Feb  2 06:46:02 np0005604943 python3.9[224321]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:02 np0005604943 lvm[224343]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:46:02 np0005604943 lvm[224343]: VG ceph_vg2 finished
Feb  2 06:46:02 np0005604943 cranky_bhabha[224111]: {}
Feb  2 06:46:02 np0005604943 systemd[1]: libpod-b79a9f11905a770141a98a61dfb7123d84d09e884a662c943fef3653496ba5d2.scope: Deactivated successfully.
Feb  2 06:46:02 np0005604943 podman[224094]: 2026-02-02 11:46:02.901901107 +0000 UTC m=+0.835785903 container died b79a9f11905a770141a98a61dfb7123d84d09e884a662c943fef3653496ba5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bhabha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 06:46:02 np0005604943 systemd[1]: var-lib-containers-storage-overlay-93feb76fbed76bf4e3f54ed1a9914bc5206fe3b79bf452e93083c2153c5eec52-merged.mount: Deactivated successfully.
Feb  2 06:46:02 np0005604943 podman[224094]: 2026-02-02 11:46:02.935864887 +0000 UTC m=+0.869749653 container remove b79a9f11905a770141a98a61dfb7123d84d09e884a662c943fef3653496ba5d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_bhabha, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:46:02 np0005604943 systemd[1]: libpod-conmon-b79a9f11905a770141a98a61dfb7123d84d09e884a662c943fef3653496ba5d2.scope: Deactivated successfully.
Feb  2 06:46:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:46:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:46:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:46:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:46:03 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:46:03 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:46:03 np0005604943 python3.9[224534]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 06:46:03 np0005604943 systemd[1]: Reloading.
Feb  2 06:46:03 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:46:03 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:46:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:04 np0005604943 python3.9[224719]: ansible-ansible.builtin.service_facts Invoked
Feb  2 06:46:04 np0005604943 network[224736]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 06:46:04 np0005604943 network[224737]: 'network-scripts' will be removed from distribution in near future.
Feb  2 06:46:04 np0005604943 network[224738]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 06:46:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:46:07 np0005604943 python3.9[225011]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:46:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:08 np0005604943 python3.9[225164]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:46:08 np0005604943 python3.9[225317]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:46:09 np0005604943 python3.9[225470]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:46:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:46:09
Feb  2 06:46:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:46:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:46:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.rgw.root', 'default.rgw.log', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta']
Feb  2 06:46:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:46:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:46:10.009 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:46:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:46:10.010 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:46:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:46:10.010 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:46:10 np0005604943 python3.9[225623]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:46:10 np0005604943 python3.9[225776]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:46:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:46:11 np0005604943 python3.9[225929]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:46:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:12 np0005604943 python3.9[226082]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:46:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:46:12 np0005604943 podman[226235]: 2026-02-02 11:46:12.785687051 +0000 UTC m=+0.092933474 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 06:46:12 np0005604943 podman[226236]: 2026-02-02 11:46:12.785961849 +0000 UTC m=+0.086336884 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb  2 06:46:12 np0005604943 python3.9[226237]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:13 np0005604943 python3.9[226432]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:13 np0005604943 python3.9[226584]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:14 np0005604943 python3.9[226736]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:14 np0005604943 python3.9[226888]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:15 np0005604943 python3.9[227040]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:15 np0005604943 python3.9[227192]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:16 np0005604943 python3.9[227344]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:17 np0005604943 python3.9[227496]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:46:17 np0005604943 python3.9[227648]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:18 np0005604943 python3.9[227800]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:18 np0005604943 python3.9[227952]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:19 np0005604943 python3.9[228104]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:19 np0005604943 python3.9[228256]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:20 np0005604943 python3.9[228408]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:20 np0005604943 python3.9[228560]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:46:21 np0005604943 python3.9[228712]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:46:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:22 np0005604943 python3.9[228864]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 06:46:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:46:22 np0005604943 python3.9[229016]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 06:46:22 np0005604943 systemd[1]: Reloading.
Feb  2 06:46:23 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:46:23 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:46:23 np0005604943 python3.9[229202]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:46:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:24 np0005604943 python3.9[229355]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:46:25 np0005604943 python3.9[229508]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:46:25 np0005604943 python3.9[229661]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:46:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:25 np0005604943 python3.9[229814]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:46:26 np0005604943 python3.9[229967]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:46:27 np0005604943 python3.9[230120]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:46:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:46:27 np0005604943 python3.9[230273]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 06:46:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:28 np0005604943 python3.9[230426]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:29 np0005604943 python3.9[230578]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:29 np0005604943 python3.9[230730]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:30 np0005604943 python3.9[230882]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:31 np0005604943 python3.9[231034]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:31 np0005604943 python3.9[231186]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:32 np0005604943 python3.9[231338]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:46:32 np0005604943 python3.9[231490]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:33 np0005604943 python3.9[231642]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:33 np0005604943 python3.9[231794]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:35 np0005604943 systemd[1]: virtnodedevd.service: Deactivated successfully.
Feb  2 06:46:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:35 np0005604943 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb  2 06:46:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:46:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:39 np0005604943 python3.9[231948]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb  2 06:46:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:40 np0005604943 python3.9[232101]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 06:46:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:46:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:46:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:46:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:46:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:46:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:46:41 np0005604943 python3.9[232259]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  2 06:46:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:42 np0005604943 systemd-logind[786]: New session 50 of user zuul.
Feb  2 06:46:42 np0005604943 systemd[1]: Started Session 50 of User zuul.
Feb  2 06:46:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:46:42 np0005604943 systemd[1]: session-50.scope: Deactivated successfully.
Feb  2 06:46:42 np0005604943 systemd-logind[786]: Session 50 logged out. Waiting for processes to exit.
Feb  2 06:46:42 np0005604943 systemd-logind[786]: Removed session 50.
Feb  2 06:46:42 np0005604943 python3.9[232445]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:46:43 np0005604943 podman[232507]: 2026-02-02 11:46:43.045270263 +0000 UTC m=+0.068201673 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:46:43 np0005604943 podman[232494]: 2026-02-02 11:46:43.050026136 +0000 UTC m=+0.073129371 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:46:43 np0005604943 python3.9[232610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032802.401258-986-78365677484993/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:43 np0005604943 python3.9[232760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:46:43 np0005604943 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb  2 06:46:43 np0005604943 systemd[1]: virtqemud.service: Deactivated successfully.
Feb  2 06:46:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:44 np0005604943 python3.9[232838]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:44 np0005604943 python3.9[232988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:46:45 np0005604943 python3.9[233109]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032804.212996-986-6891272294149/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:45 np0005604943 python3.9[233259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:46:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:46 np0005604943 python3.9[233380]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032805.353655-986-23750328745934/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:46 np0005604943 python3.9[233530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:46:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:46:47 np0005604943 python3.9[233651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032806.3682394-986-54876977075707/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:47 np0005604943 python3.9[233801]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:46:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:48 np0005604943 python3.9[233922]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032807.3675065-986-186831402335887/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:48 np0005604943 python3.9[234074]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:49 np0005604943 python3.9[234226]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:46:49 np0005604943 python3.9[234378]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:46:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:50 np0005604943 python3.9[234530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:46:51 np0005604943 python3.9[234653]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1770032810.117216-1093-206256203460209/.source _original_basename=.rb55k8hi follow=False checksum=a9ec8932b2bed8c2dd44707a0bb28678e16a25f1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Feb  2 06:46:51 np0005604943 python3.9[234805]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:46:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:46:52 np0005604943 python3.9[234957]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:46:52 np0005604943 python3.9[235078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032811.879746-1119-81618800395155/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:53 np0005604943 python3.9[235228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 06:46:53 np0005604943 python3.9[235349]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770032812.8933399-1134-4110081247854/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 06:46:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:54 np0005604943 python3.9[235501]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Feb  2 06:46:55 np0005604943 python3.9[235653]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  2 06:46:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:56 np0005604943 python3[235805]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb  2 06:46:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:46:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:46:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:47:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:04 np0005604943 podman[235818]: 2026-02-02 11:47:04.906343781 +0000 UTC m=+7.935838418 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb  2 06:47:05 np0005604943 podman[235966]: 2026-02-02 11:47:05.01264832 +0000 UTC m=+0.040309569 container create 3e31e3cd375528e57c299e6338ab958bae8e250e949c22ed5df21174bc4a2ec8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init)
Feb  2 06:47:05 np0005604943 podman[235966]: 2026-02-02 11:47:04.989004784 +0000 UTC m=+0.016666063 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb  2 06:47:05 np0005604943 python3[235805]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Feb  2 06:47:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:47:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:47:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:47:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:47:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:47:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:47:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:47:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:47:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:47:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:47:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:47:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:47:05 np0005604943 podman[236176]: 2026-02-02 11:47:05.414734814 +0000 UTC m=+0.040695620 container create fe419adcf314769da2c4c0e669ee0d2f7872253c7cd3f2ef8afce6de9d204059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:47:05 np0005604943 systemd[1]: Started libpod-conmon-fe419adcf314769da2c4c0e669ee0d2f7872253c7cd3f2ef8afce6de9d204059.scope.
Feb  2 06:47:05 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:47:05 np0005604943 podman[236176]: 2026-02-02 11:47:05.395778457 +0000 UTC m=+0.021739323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:47:05 np0005604943 podman[236176]: 2026-02-02 11:47:05.492157311 +0000 UTC m=+0.118118147 container init fe419adcf314769da2c4c0e669ee0d2f7872253c7cd3f2ef8afce6de9d204059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:47:05 np0005604943 podman[236176]: 2026-02-02 11:47:05.49859476 +0000 UTC m=+0.124555576 container start fe419adcf314769da2c4c0e669ee0d2f7872253c7cd3f2ef8afce6de9d204059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:47:05 np0005604943 podman[236176]: 2026-02-02 11:47:05.502188709 +0000 UTC m=+0.128149525 container attach fe419adcf314769da2c4c0e669ee0d2f7872253c7cd3f2ef8afce6de9d204059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 06:47:05 np0005604943 compassionate_ritchie[236242]: 167 167
Feb  2 06:47:05 np0005604943 systemd[1]: libpod-fe419adcf314769da2c4c0e669ee0d2f7872253c7cd3f2ef8afce6de9d204059.scope: Deactivated successfully.
Feb  2 06:47:05 np0005604943 podman[236176]: 2026-02-02 11:47:05.504322909 +0000 UTC m=+0.130283725 container died fe419adcf314769da2c4c0e669ee0d2f7872253c7cd3f2ef8afce6de9d204059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 06:47:05 np0005604943 systemd[1]: var-lib-containers-storage-overlay-849fb4d6516da02b28f2a1c6518860efec861b91706e70eb85e2c88fc2cbd37d-merged.mount: Deactivated successfully.
Feb  2 06:47:05 np0005604943 podman[236176]: 2026-02-02 11:47:05.545651205 +0000 UTC m=+0.171612021 container remove fe419adcf314769da2c4c0e669ee0d2f7872253c7cd3f2ef8afce6de9d204059 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 06:47:05 np0005604943 systemd[1]: libpod-conmon-fe419adcf314769da2c4c0e669ee0d2f7872253c7cd3f2ef8afce6de9d204059.scope: Deactivated successfully.
Feb  2 06:47:05 np0005604943 python3.9[236247]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:47:05 np0005604943 podman[236268]: 2026-02-02 11:47:05.66300929 +0000 UTC m=+0.036363159 container create 5204a7344c5a74a40550474c5fd3a5b2fe13b06fd0f5ff0616edc76d586b1c8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Feb  2 06:47:05 np0005604943 systemd[1]: Started libpod-conmon-5204a7344c5a74a40550474c5fd3a5b2fe13b06fd0f5ff0616edc76d586b1c8c.scope.
Feb  2 06:47:05 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:47:05 np0005604943 podman[236268]: 2026-02-02 11:47:05.645273248 +0000 UTC m=+0.018627147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:47:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389bbd7815e792de34605d03202304a11e2343403d34a450c129100cc98d8097/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389bbd7815e792de34605d03202304a11e2343403d34a450c129100cc98d8097/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389bbd7815e792de34605d03202304a11e2343403d34a450c129100cc98d8097/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389bbd7815e792de34605d03202304a11e2343403d34a450c129100cc98d8097/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389bbd7815e792de34605d03202304a11e2343403d34a450c129100cc98d8097/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:05 np0005604943 podman[236268]: 2026-02-02 11:47:05.778512875 +0000 UTC m=+0.151866834 container init 5204a7344c5a74a40550474c5fd3a5b2fe13b06fd0f5ff0616edc76d586b1c8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_davinci, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:47:05 np0005604943 podman[236268]: 2026-02-02 11:47:05.785814127 +0000 UTC m=+0.159167996 container start 5204a7344c5a74a40550474c5fd3a5b2fe13b06fd0f5ff0616edc76d586b1c8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_davinci, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 06:47:05 np0005604943 podman[236268]: 2026-02-02 11:47:05.789014036 +0000 UTC m=+0.162367945 container attach 5204a7344c5a74a40550474c5fd3a5b2fe13b06fd0f5ff0616edc76d586b1c8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_davinci, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:47:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:06 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:47:06 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:47:06 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:47:06 np0005604943 brave_davinci[236287]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:47:06 np0005604943 brave_davinci[236287]: --> All data devices are unavailable
Feb  2 06:47:06 np0005604943 systemd[1]: libpod-5204a7344c5a74a40550474c5fd3a5b2fe13b06fd0f5ff0616edc76d586b1c8c.scope: Deactivated successfully.
Feb  2 06:47:06 np0005604943 podman[236268]: 2026-02-02 11:47:06.233874936 +0000 UTC m=+0.607228845 container died 5204a7344c5a74a40550474c5fd3a5b2fe13b06fd0f5ff0616edc76d586b1c8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_davinci, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:47:06 np0005604943 systemd[1]: var-lib-containers-storage-overlay-389bbd7815e792de34605d03202304a11e2343403d34a450c129100cc98d8097-merged.mount: Deactivated successfully.
Feb  2 06:47:06 np0005604943 podman[236268]: 2026-02-02 11:47:06.282896596 +0000 UTC m=+0.656250515 container remove 5204a7344c5a74a40550474c5fd3a5b2fe13b06fd0f5ff0616edc76d586b1c8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_davinci, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:47:06 np0005604943 systemd[1]: libpod-conmon-5204a7344c5a74a40550474c5fd3a5b2fe13b06fd0f5ff0616edc76d586b1c8c.scope: Deactivated successfully.
Feb  2 06:47:06 np0005604943 python3.9[236469]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Feb  2 06:47:06 np0005604943 podman[236557]: 2026-02-02 11:47:06.701273972 +0000 UTC m=+0.033467460 container create 28b112c0b805cdd3d0095a14748ee712bf75a1ba1fbe92594f86a63c2abf248c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 06:47:06 np0005604943 systemd[1]: Started libpod-conmon-28b112c0b805cdd3d0095a14748ee712bf75a1ba1fbe92594f86a63c2abf248c.scope.
Feb  2 06:47:06 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:47:06 np0005604943 podman[236557]: 2026-02-02 11:47:06.763049625 +0000 UTC m=+0.095243163 container init 28b112c0b805cdd3d0095a14748ee712bf75a1ba1fbe92594f86a63c2abf248c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_goldstine, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 06:47:06 np0005604943 podman[236557]: 2026-02-02 11:47:06.768065014 +0000 UTC m=+0.100258522 container start 28b112c0b805cdd3d0095a14748ee712bf75a1ba1fbe92594f86a63c2abf248c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:47:06 np0005604943 jovial_goldstine[236575]: 167 167
Feb  2 06:47:06 np0005604943 systemd[1]: libpod-28b112c0b805cdd3d0095a14748ee712bf75a1ba1fbe92594f86a63c2abf248c.scope: Deactivated successfully.
Feb  2 06:47:06 np0005604943 conmon[236575]: conmon 28b112c0b805cdd3d009 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28b112c0b805cdd3d0095a14748ee712bf75a1ba1fbe92594f86a63c2abf248c.scope/container/memory.events
Feb  2 06:47:06 np0005604943 podman[236557]: 2026-02-02 11:47:06.774519643 +0000 UTC m=+0.106713181 container attach 28b112c0b805cdd3d0095a14748ee712bf75a1ba1fbe92594f86a63c2abf248c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:47:06 np0005604943 podman[236557]: 2026-02-02 11:47:06.775235003 +0000 UTC m=+0.107428521 container died 28b112c0b805cdd3d0095a14748ee712bf75a1ba1fbe92594f86a63c2abf248c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 06:47:06 np0005604943 podman[236557]: 2026-02-02 11:47:06.685116583 +0000 UTC m=+0.017310101 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:47:06 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f3634959ec353e36b5456492246fdc812c03fc1c932000b2e3387c00c4fcb566-merged.mount: Deactivated successfully.
Feb  2 06:47:06 np0005604943 podman[236557]: 2026-02-02 11:47:06.815763027 +0000 UTC m=+0.147956535 container remove 28b112c0b805cdd3d0095a14748ee712bf75a1ba1fbe92594f86a63c2abf248c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_goldstine, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:47:06 np0005604943 systemd[1]: libpod-conmon-28b112c0b805cdd3d0095a14748ee712bf75a1ba1fbe92594f86a63c2abf248c.scope: Deactivated successfully.
Feb  2 06:47:06 np0005604943 podman[236650]: 2026-02-02 11:47:06.9348396 +0000 UTC m=+0.040196285 container create d69df092ead9c024ac75ed362b1d8300a10c1c9683bbf197d9ab3e81a7fbfca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hermann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:47:06 np0005604943 systemd[1]: Started libpod-conmon-d69df092ead9c024ac75ed362b1d8300a10c1c9683bbf197d9ab3e81a7fbfca0.scope.
Feb  2 06:47:06 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:47:07 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743e0f32a84747581f03dfc146e0ef0e09a115fc05494d4f84bdb0fc37cd1635/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:07 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743e0f32a84747581f03dfc146e0ef0e09a115fc05494d4f84bdb0fc37cd1635/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:07 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743e0f32a84747581f03dfc146e0ef0e09a115fc05494d4f84bdb0fc37cd1635/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:07 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743e0f32a84747581f03dfc146e0ef0e09a115fc05494d4f84bdb0fc37cd1635/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:07 np0005604943 podman[236650]: 2026-02-02 11:47:06.916868052 +0000 UTC m=+0.022224757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:47:07 np0005604943 podman[236650]: 2026-02-02 11:47:07.019358715 +0000 UTC m=+0.124715420 container init d69df092ead9c024ac75ed362b1d8300a10c1c9683bbf197d9ab3e81a7fbfca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:47:07 np0005604943 podman[236650]: 2026-02-02 11:47:07.023389897 +0000 UTC m=+0.128746572 container start d69df092ead9c024ac75ed362b1d8300a10c1c9683bbf197d9ab3e81a7fbfca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 06:47:07 np0005604943 podman[236650]: 2026-02-02 11:47:07.026537874 +0000 UTC m=+0.131894549 container attach d69df092ead9c024ac75ed362b1d8300a10c1c9683bbf197d9ab3e81a7fbfca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb  2 06:47:07 np0005604943 python3.9[236746]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  2 06:47:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]: {
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:    "0": [
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:        {
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "devices": [
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "/dev/loop3"
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            ],
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_name": "ceph_lv0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_size": "21470642176",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "name": "ceph_lv0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "tags": {
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.cluster_name": "ceph",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.crush_device_class": "",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.encrypted": "0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.objectstore": "bluestore",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.osd_id": "0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.type": "block",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.vdo": "0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.with_tpm": "0"
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            },
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "type": "block",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "vg_name": "ceph_vg0"
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:        }
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:    ],
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:    "1": [
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:        {
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "devices": [
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "/dev/loop4"
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            ],
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_name": "ceph_lv1",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_size": "21470642176",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "name": "ceph_lv1",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "tags": {
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.cluster_name": "ceph",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.crush_device_class": "",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.encrypted": "0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.objectstore": "bluestore",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.osd_id": "1",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.type": "block",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.vdo": "0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.with_tpm": "0"
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            },
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "type": "block",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "vg_name": "ceph_vg1"
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:        }
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:    ],
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:    "2": [
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:        {
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "devices": [
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "/dev/loop5"
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            ],
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_name": "ceph_lv2",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_size": "21470642176",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "name": "ceph_lv2",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "tags": {
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.cluster_name": "ceph",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.crush_device_class": "",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.encrypted": "0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.objectstore": "bluestore",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.osd_id": "2",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.type": "block",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.vdo": "0",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:                "ceph.with_tpm": "0"
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            },
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "type": "block",
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:            "vg_name": "ceph_vg2"
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:        }
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]:    ]
Feb  2 06:47:07 np0005604943 amazing_hermann[236713]: }
Feb  2 06:47:07 np0005604943 systemd[1]: libpod-d69df092ead9c024ac75ed362b1d8300a10c1c9683bbf197d9ab3e81a7fbfca0.scope: Deactivated successfully.
Feb  2 06:47:07 np0005604943 podman[236650]: 2026-02-02 11:47:07.336549193 +0000 UTC m=+0.441905878 container died d69df092ead9c024ac75ed362b1d8300a10c1c9683bbf197d9ab3e81a7fbfca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hermann, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 06:47:07 np0005604943 systemd[1]: var-lib-containers-storage-overlay-743e0f32a84747581f03dfc146e0ef0e09a115fc05494d4f84bdb0fc37cd1635-merged.mount: Deactivated successfully.
Feb  2 06:47:07 np0005604943 podman[236650]: 2026-02-02 11:47:07.378812076 +0000 UTC m=+0.484168751 container remove d69df092ead9c024ac75ed362b1d8300a10c1c9683bbf197d9ab3e81a7fbfca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:47:07 np0005604943 systemd[1]: libpod-conmon-d69df092ead9c024ac75ed362b1d8300a10c1c9683bbf197d9ab3e81a7fbfca0.scope: Deactivated successfully.
Feb  2 06:47:07 np0005604943 podman[236946]: 2026-02-02 11:47:07.811070007 +0000 UTC m=+0.040893156 container create aa3b0726ef07e706fd2f4e3597bccc06eccc709f32032e03343d83a298ff5f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:47:07 np0005604943 systemd[1]: Started libpod-conmon-aa3b0726ef07e706fd2f4e3597bccc06eccc709f32032e03343d83a298ff5f35.scope.
Feb  2 06:47:07 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:47:07 np0005604943 podman[236946]: 2026-02-02 11:47:07.878872998 +0000 UTC m=+0.108696157 container init aa3b0726ef07e706fd2f4e3597bccc06eccc709f32032e03343d83a298ff5f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cori, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 06:47:07 np0005604943 podman[236946]: 2026-02-02 11:47:07.88329082 +0000 UTC m=+0.113113969 container start aa3b0726ef07e706fd2f4e3597bccc06eccc709f32032e03343d83a298ff5f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cori, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 06:47:07 np0005604943 podman[236946]: 2026-02-02 11:47:07.790660281 +0000 UTC m=+0.020483460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:47:07 np0005604943 nostalgic_cori[236994]: 167 167
Feb  2 06:47:07 np0005604943 systemd[1]: libpod-aa3b0726ef07e706fd2f4e3597bccc06eccc709f32032e03343d83a298ff5f35.scope: Deactivated successfully.
Feb  2 06:47:07 np0005604943 podman[236946]: 2026-02-02 11:47:07.894604814 +0000 UTC m=+0.124427963 container attach aa3b0726ef07e706fd2f4e3597bccc06eccc709f32032e03343d83a298ff5f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cori, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:47:07 np0005604943 podman[236946]: 2026-02-02 11:47:07.895055176 +0000 UTC m=+0.124878335 container died aa3b0726ef07e706fd2f4e3597bccc06eccc709f32032e03343d83a298ff5f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cori, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 06:47:07 np0005604943 systemd[1]: var-lib-containers-storage-overlay-30c8419b119506184db0c47ba71129a7383d0d9cc44701e29bebfd09a9cce85e-merged.mount: Deactivated successfully.
Feb  2 06:47:07 np0005604943 podman[236946]: 2026-02-02 11:47:07.924104772 +0000 UTC m=+0.153927921 container remove aa3b0726ef07e706fd2f4e3597bccc06eccc709f32032e03343d83a298ff5f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cori, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:47:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:07 np0005604943 systemd[1]: libpod-conmon-aa3b0726ef07e706fd2f4e3597bccc06eccc709f32032e03343d83a298ff5f35.scope: Deactivated successfully.
Feb  2 06:47:08 np0005604943 podman[237019]: 2026-02-02 11:47:08.02890454 +0000 UTC m=+0.032877264 container create 92328bb37c51ada86948c5c4a4adb276fe7ea170605e2723103736ef112ffb51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:47:08 np0005604943 systemd[1]: Started libpod-conmon-92328bb37c51ada86948c5c4a4adb276fe7ea170605e2723103736ef112ffb51.scope.
Feb  2 06:47:08 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:47:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ae602147afbd7fd3dc40392267a5e76443b70577380c854990dd083adfdacd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ae602147afbd7fd3dc40392267a5e76443b70577380c854990dd083adfdacd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ae602147afbd7fd3dc40392267a5e76443b70577380c854990dd083adfdacd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ae602147afbd7fd3dc40392267a5e76443b70577380c854990dd083adfdacd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:08 np0005604943 podman[237019]: 2026-02-02 11:47:08.109064163 +0000 UTC m=+0.113036887 container init 92328bb37c51ada86948c5c4a4adb276fe7ea170605e2723103736ef112ffb51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 06:47:08 np0005604943 podman[237019]: 2026-02-02 11:47:08.012291709 +0000 UTC m=+0.016264453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:47:08 np0005604943 python3[236996]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb  2 06:47:08 np0005604943 podman[237019]: 2026-02-02 11:47:08.11831946 +0000 UTC m=+0.122292184 container start 92328bb37c51ada86948c5c4a4adb276fe7ea170605e2723103736ef112ffb51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_goldberg, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:47:08 np0005604943 podman[237019]: 2026-02-02 11:47:08.121260462 +0000 UTC m=+0.125233216 container attach 92328bb37c51ada86948c5c4a4adb276fe7ea170605e2723103736ef112ffb51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_goldberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:47:08 np0005604943 podman[237075]: 2026-02-02 11:47:08.246913656 +0000 UTC m=+0.038672393 container create e8469079de5f6cf853327e20582fc7412a39dd166f8e8fc6edb7e70c21cf9b07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible)
Feb  2 06:47:08 np0005604943 podman[237075]: 2026-02-02 11:47:08.227372895 +0000 UTC m=+0.019131652 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb  2 06:47:08 np0005604943 python3[236996]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Feb  2 06:47:08 np0005604943 lvm[237286]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:47:08 np0005604943 lvm[237286]: VG ceph_vg1 finished
Feb  2 06:47:08 np0005604943 lvm[237282]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:47:08 np0005604943 lvm[237282]: VG ceph_vg0 finished
Feb  2 06:47:08 np0005604943 lvm[237288]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:47:08 np0005604943 lvm[237288]: VG ceph_vg2 finished
Feb  2 06:47:08 np0005604943 lvm[237292]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:47:08 np0005604943 lvm[237292]: VG ceph_vg1 finished
Feb  2 06:47:08 np0005604943 lvm[237307]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:47:08 np0005604943 lvm[237307]: VG ceph_vg0 finished
Feb  2 06:47:08 np0005604943 affectionate_goldberg[237035]: {}
Feb  2 06:47:08 np0005604943 systemd[1]: libpod-92328bb37c51ada86948c5c4a4adb276fe7ea170605e2723103736ef112ffb51.scope: Deactivated successfully.
Feb  2 06:47:08 np0005604943 podman[237019]: 2026-02-02 11:47:08.73585254 +0000 UTC m=+0.739825254 container died 92328bb37c51ada86948c5c4a4adb276fe7ea170605e2723103736ef112ffb51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb  2 06:47:08 np0005604943 systemd[1]: var-lib-containers-storage-overlay-c0ae602147afbd7fd3dc40392267a5e76443b70577380c854990dd083adfdacd-merged.mount: Deactivated successfully.
Feb  2 06:47:08 np0005604943 podman[237019]: 2026-02-02 11:47:08.795941207 +0000 UTC m=+0.799913931 container remove 92328bb37c51ada86948c5c4a4adb276fe7ea170605e2723103736ef112ffb51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 06:47:08 np0005604943 systemd[1]: libpod-conmon-92328bb37c51ada86948c5c4a4adb276fe7ea170605e2723103736ef112ffb51.scope: Deactivated successfully.
Feb  2 06:47:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:47:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:47:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:47:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:47:08 np0005604943 python3.9[237345]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:47:09 np0005604943 python3.9[237535]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:47:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:47:09
Feb  2 06:47:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:47:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:47:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.rgw.root', '.mgr', 'images']
Feb  2 06:47:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:47:09 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:47:09 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:47:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:47:10.010 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:47:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:47:10.011 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:47:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:47:10.011 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:47:10 np0005604943 python3.9[237686]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770032829.5964274-1230-132468480919134/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 06:47:10 np0005604943 python3.9[237762]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 06:47:10 np0005604943 systemd[1]: Reloading.
Feb  2 06:47:10 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:47:10 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:47:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:47:11 np0005604943 python3.9[237872]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 06:47:11 np0005604943 systemd[1]: Reloading.
Feb  2 06:47:11 np0005604943 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 06:47:11 np0005604943 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 06:47:11 np0005604943 systemd[1]: Starting nova_compute container...
Feb  2 06:47:11 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:47:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a64f6f7b79961cf555e70f1e1cba521849f0814f4f805b139c10e64115ac7d0/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a64f6f7b79961cf555e70f1e1cba521849f0814f4f805b139c10e64115ac7d0/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a64f6f7b79961cf555e70f1e1cba521849f0814f4f805b139c10e64115ac7d0/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a64f6f7b79961cf555e70f1e1cba521849f0814f4f805b139c10e64115ac7d0/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a64f6f7b79961cf555e70f1e1cba521849f0814f4f805b139c10e64115ac7d0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:11 np0005604943 podman[237911]: 2026-02-02 11:47:11.793114907 +0000 UTC m=+0.087789766 container init e8469079de5f6cf853327e20582fc7412a39dd166f8e8fc6edb7e70c21cf9b07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, org.label-schema.build-date=20260127, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Feb  2 06:47:11 np0005604943 podman[237911]: 2026-02-02 11:47:11.798185878 +0000 UTC m=+0.092860717 container start e8469079de5f6cf853327e20582fc7412a39dd166f8e8fc6edb7e70c21cf9b07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Feb  2 06:47:11 np0005604943 podman[237911]: nova_compute
Feb  2 06:47:11 np0005604943 nova_compute[237927]: + sudo -E kolla_set_configs
Feb  2 06:47:11 np0005604943 systemd[1]: Started nova_compute container.
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Validating config file
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Copying service configuration files
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Deleting /etc/ceph
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Creating directory /etc/ceph
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /etc/ceph
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Writing out command to execute
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  2 06:47:11 np0005604943 nova_compute[237927]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  2 06:47:11 np0005604943 nova_compute[237927]: ++ cat /run_command
Feb  2 06:47:11 np0005604943 nova_compute[237927]: + CMD=nova-compute
Feb  2 06:47:11 np0005604943 nova_compute[237927]: + ARGS=
Feb  2 06:47:11 np0005604943 nova_compute[237927]: + sudo kolla_copy_cacerts
Feb  2 06:47:11 np0005604943 nova_compute[237927]: + [[ ! -n '' ]]
Feb  2 06:47:11 np0005604943 nova_compute[237927]: + . kolla_extend_start
Feb  2 06:47:11 np0005604943 nova_compute[237927]: Running command: 'nova-compute'
Feb  2 06:47:11 np0005604943 nova_compute[237927]: + echo 'Running command: '\''nova-compute'\'''
Feb  2 06:47:11 np0005604943 nova_compute[237927]: + umask 0022
Feb  2 06:47:11 np0005604943 nova_compute[237927]: + exec nova-compute
Feb  2 06:47:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:47:12 np0005604943 python3.9[238088]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:47:13 np0005604943 podman[238213]: 2026-02-02 11:47:13.196929548 +0000 UTC m=+0.079607729 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb  2 06:47:13 np0005604943 podman[238214]: 2026-02-02 11:47:13.207027578 +0000 UTC m=+0.089331819 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Feb  2 06:47:13 np0005604943 python3.9[238256]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:47:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:13 np0005604943 nova_compute[237927]: 2026-02-02 11:47:13.953 237931 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 06:47:13 np0005604943 nova_compute[237927]: 2026-02-02 11:47:13.953 237931 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 06:47:13 np0005604943 nova_compute[237927]: 2026-02-02 11:47:13.954 237931 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 06:47:13 np0005604943 nova_compute[237927]: 2026-02-02 11:47:13.954 237931 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb  2 06:47:13 np0005604943 python3.9[238430]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.104 237931 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.122 237931 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.123 237931 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.726 237931 INFO nova.virt.driver [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.875 237931 INFO nova.compute.provider_config [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb  2 06:47:14 np0005604943 python3.9[238586]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.892 237931 DEBUG oslo_concurrency.lockutils [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.892 237931 DEBUG oslo_concurrency.lockutils [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:47:14 np0005604943 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.893 237931 DEBUG oslo_concurrency.lockutils [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.893 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.893 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.893 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.894 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.894 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.894 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.894 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.894 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.895 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.895 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.895 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.895 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.895 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.896 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.896 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.896 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.896 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.896 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.896 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.897 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.897 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.897 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.897 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.897 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.897 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.898 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.898 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.898 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.898 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.898 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.898 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.898 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.899 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.899 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.899 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.899 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.899 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.899 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.900 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.900 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.900 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.900 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.900 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.900 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.901 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.901 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.901 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.901 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.901 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.901 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.901 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.902 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.902 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.902 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.902 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.902 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.902 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.902 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.903 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.903 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.903 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.903 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.903 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.903 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.903 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.904 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.904 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.904 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.904 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.904 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.904 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.904 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.904 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.905 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.905 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.905 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.905 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.905 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.905 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.905 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.906 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.906 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.906 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.906 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.906 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.906 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.906 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.907 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.907 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.907 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.907 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.907 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.907 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.907 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.907 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.908 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.908 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.908 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.908 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.908 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.908 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.909 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.909 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.909 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.909 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.909 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.909 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.910 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.910 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.910 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.910 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.910 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.910 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.911 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.911 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.911 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.911 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.911 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.912 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.912 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.912 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.912 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.912 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.912 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.913 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.913 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.913 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.913 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.913 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.913 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.913 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.913 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.914 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.914 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.914 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.914 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.914 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.914 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.914 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.915 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.915 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.915 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.915 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.915 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.915 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.915 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.916 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.916 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.916 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.916 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.916 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.916 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.916 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.917 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.917 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.917 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.917 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.917 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.917 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.918 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.918 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.918 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.918 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.918 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.918 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.918 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.918 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.919 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.919 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.919 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.919 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.919 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.919 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.919 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.920 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.920 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.920 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.920 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.920 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.920 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.920 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.921 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.921 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.921 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.921 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.921 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.921 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.921 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.922 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.922 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.922 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.922 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.922 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.922 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.922 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.922 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.923 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.923 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.923 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.923 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.923 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.923 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.923 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.924 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.924 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.924 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.924 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.924 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.924 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.924 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.925 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.925 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.925 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.925 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.925 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.925 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.925 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.925 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.926 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.926 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.926 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.926 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.926 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.926 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.926 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.927 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.927 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.927 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.927 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.927 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.927 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.927 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.928 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.928 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.928 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.928 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.928 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.928 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.928 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.928 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.929 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.929 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.929 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.929 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.929 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.929 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.929 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.930 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.930 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.930 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.930 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.930 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.930 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.930 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.931 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.931 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.931 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.931 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.931 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.931 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.931 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.932 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.932 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.932 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.932 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.932 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.932 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.933 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.933 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.933 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.933 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.933 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.933 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.933 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.933 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.934 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.934 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.934 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.934 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.934 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.934 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.934 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.935 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.935 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.935 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.935 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.935 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.935 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.935 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.936 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.936 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.936 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.936 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.936 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.936 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.936 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.937 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.937 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.937 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.937 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.937 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.937 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.937 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.938 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.938 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.938 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.938 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.938 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.938 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.938 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.939 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.939 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.939 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.939 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.939 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.939 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.939 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.940 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.940 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.940 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.940 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.940 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.940 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.940 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.941 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.941 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.941 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.941 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.941 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.941 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.941 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.942 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.942 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.942 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.942 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.942 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.942 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.942 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.943 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.943 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.943 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.943 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.943 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.943 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.943 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.944 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.944 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.944 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.944 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.944 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.944 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.944 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.945 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.945 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.945 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.945 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.945 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.945 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.946 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.946 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.946 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.946 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.946 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.946 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.946 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.947 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.947 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.947 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.947 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.947 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.947 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.947 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.948 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.948 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.948 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.948 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.948 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.948 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.948 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.949 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.949 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.949 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.949 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.949 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.949 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.949 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.950 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.950 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.950 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.950 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.950 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.950 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.950 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.951 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.951 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.951 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.951 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.951 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.951 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.951 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.952 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.952 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.952 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.952 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.952 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.952 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.952 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.953 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.953 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.953 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.953 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.953 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.953 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.953 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.953 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.954 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.954 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.954 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.954 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.954 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.954 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.954 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.955 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.955 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.955 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.955 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.955 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.955 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.955 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.956 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.956 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.956 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.956 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.956 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.956 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.957 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.957 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.957 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.957 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.957 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.957 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.958 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.958 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.958 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.958 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.958 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.958 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.958 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.959 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.959 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.959 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.959 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.959 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.959 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.960 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.960 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.960 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.960 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.960 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.960 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.960 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.961 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.961 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.961 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.961 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.961 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.961 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.961 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.962 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.962 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.962 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.962 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.962 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.962 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.962 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.963 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.963 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.963 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.963 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.963 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.963 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.963 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.964 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.964 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.964 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.964 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.964 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.964 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.964 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.965 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.965 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.965 237931 WARNING oslo_config.cfg [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb  2 06:47:14 np0005604943 nova_compute[237927]: live_migration_uri is deprecated for removal in favor of two other options that
Feb  2 06:47:14 np0005604943 nova_compute[237927]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb  2 06:47:14 np0005604943 nova_compute[237927]: and ``live_migration_inbound_addr`` respectively.
Feb  2 06:47:14 np0005604943 nova_compute[237927]: ).  Its value may be silently ignored in the future.#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.965 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.965 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.965 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.966 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.966 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.966 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.966 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.966 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.966 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.966 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.967 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.967 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.967 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.967 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.967 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.967 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.967 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.968 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.968 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.rbd_secret_uuid        = 4548a36b-7cdc-5e3e-a814-4e1571be1fae log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.968 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.968 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.968 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.968 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.968 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.969 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.969 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.969 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.969 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.969 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.969 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.969 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.970 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.970 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.970 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.970 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.970 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.970 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.970 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.971 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.971 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.971 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.971 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.971 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.971 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.972 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.972 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.972 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.972 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.972 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.973 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.973 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.973 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.973 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.973 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.973 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.974 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.974 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.974 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.974 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.974 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.974 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.974 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.975 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.975 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.975 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.975 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.975 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.975 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.975 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.976 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.976 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.976 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.976 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.976 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.976 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.976 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.977 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.977 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.977 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.977 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.977 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.977 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.977 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.978 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.978 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.978 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.978 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.978 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.978 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.978 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.979 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.979 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.979 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.979 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.979 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.979 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.979 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.979 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.980 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.980 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.980 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.980 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.980 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.980 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.980 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.981 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.981 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.981 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.981 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.981 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.981 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.981 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.981 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.982 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.982 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.982 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.982 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.982 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.982 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.982 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.982 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.983 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.983 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.983 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.983 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.983 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.983 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.983 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.984 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.984 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.984 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.984 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.984 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.984 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.984 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.985 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.985 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.985 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.985 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.985 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.985 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.986 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.986 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.986 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.986 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.986 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.986 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.987 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.987 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.987 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.987 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.987 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.987 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.987 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.988 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.988 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.988 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.988 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.988 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.988 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.988 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.989 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.989 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.989 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.989 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.989 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.989 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.989 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.990 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.990 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.990 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.990 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.990 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.990 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.990 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.991 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.991 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.991 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.991 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.991 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.991 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.992 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.992 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.992 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.992 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.992 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.992 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.992 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.992 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.993 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.993 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.993 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.993 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.993 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.993 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.993 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.994 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.994 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.994 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.994 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.994 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.994 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.995 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.995 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.995 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.995 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.995 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.995 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.995 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.996 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.996 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.996 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.996 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.996 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.996 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.997 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.997 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.997 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.997 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.997 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.997 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.997 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.998 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.998 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.998 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.998 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.998 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.998 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.999 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.999 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.999 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.999 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:14 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.999 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:14.999 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.000 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.000 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.000 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.000 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.000 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.000 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.000 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.001 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.001 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.001 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.001 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.001 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.001 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.001 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.002 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.002 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.002 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.002 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.002 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.002 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.003 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.003 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.003 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.003 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.003 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.003 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.003 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.003 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.004 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.004 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.004 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.004 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.004 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.004 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.004 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.005 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.005 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.005 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.005 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.005 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.005 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.005 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.005 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.006 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.006 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.006 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.006 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.006 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.006 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.006 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.007 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.007 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.007 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.007 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.007 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.007 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.007 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.008 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.008 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.008 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.008 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.008 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.008 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.008 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.009 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.009 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.009 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.009 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.009 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.009 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.010 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.010 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.010 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.010 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.010 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.010 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.010 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.011 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.011 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.011 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.011 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.011 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.011 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.012 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.012 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.012 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.012 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.012 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.012 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.013 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.013 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.013 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.013 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.013 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.013 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.013 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.014 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.014 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.014 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.014 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.014 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.015 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.015 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.015 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.015 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.015 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.016 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.016 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.016 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.016 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.016 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.016 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.016 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.017 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.017 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.017 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.017 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.017 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.017 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.017 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.018 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.018 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.018 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.018 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.018 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.018 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.019 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.019 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.019 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.019 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.019 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.020 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.020 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.020 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.020 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.020 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.020 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.021 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.021 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.021 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.021 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.021 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.022 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.022 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.022 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.022 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.022 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.022 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.023 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.023 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.023 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.023 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.023 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.023 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.023 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.024 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.024 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.024 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.024 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.024 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.024 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.024 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.025 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.025 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.025 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.025 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.025 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.025 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.025 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.025 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.026 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.026 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.026 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.026 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.026 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.026 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.026 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.027 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.027 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.027 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.027 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.027 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.027 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.027 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.028 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.028 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.028 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.028 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.028 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.028 237931 DEBUG oslo_service.service [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.029 237931 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.047 237931 DEBUG nova.virt.libvirt.host [None req-4dec1df8-cde1-4c97-af73-24481750ede0 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.048 237931 DEBUG nova.virt.libvirt.host [None req-4dec1df8-cde1-4c97-af73-24481750ede0 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.048 237931 DEBUG nova.virt.libvirt.host [None req-4dec1df8-cde1-4c97-af73-24481750ede0 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.048 237931 DEBUG nova.virt.libvirt.host [None req-4dec1df8-cde1-4c97-af73-24481750ede0 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb  2 06:47:15 np0005604943 systemd[1]: Starting libvirt QEMU daemon...
Feb  2 06:47:15 np0005604943 systemd[1]: Started libvirt QEMU daemon.
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.109 237931 DEBUG nova.virt.libvirt.host [None req-4dec1df8-cde1-4c97-af73-24481750ede0 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fbb189e3ee0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.112 237931 DEBUG nova.virt.libvirt.host [None req-4dec1df8-cde1-4c97-af73-24481750ede0 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fbb189e3ee0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.113 237931 INFO nova.virt.libvirt.driver [None req-4dec1df8-cde1-4c97-af73-24481750ede0 - - - - - -] Connection event '1' reason 'None'#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.127 237931 WARNING nova.virt.libvirt.driver [None req-4dec1df8-cde1-4c97-af73-24481750ede0 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.127 237931 DEBUG nova.virt.libvirt.volume.mount [None req-4dec1df8-cde1-4c97-af73-24481750ede0 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb  2 06:47:15 np0005604943 python3.9[238812]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 06:47:15 np0005604943 systemd[1]: Stopping nova_compute container...
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.838 237931 DEBUG oslo_concurrency.lockutils [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.840 237931 DEBUG oslo_concurrency.lockutils [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:47:15 np0005604943 nova_compute[237927]: 2026-02-02 11:47:15.840 237931 DEBUG oslo_concurrency.lockutils [None req-89d3b7bb-afe5-4eb6-9645-fb46ceeb9c34 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:47:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:16 np0005604943 virtqemud[238654]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Feb  2 06:47:16 np0005604943 virtqemud[238654]: hostname: compute-0
Feb  2 06:47:16 np0005604943 virtqemud[238654]: End of file while reading data: Input/output error
Feb  2 06:47:16 np0005604943 systemd[1]: libpod-e8469079de5f6cf853327e20582fc7412a39dd166f8e8fc6edb7e70c21cf9b07.scope: Deactivated successfully.
Feb  2 06:47:16 np0005604943 systemd[1]: libpod-e8469079de5f6cf853327e20582fc7412a39dd166f8e8fc6edb7e70c21cf9b07.scope: Consumed 2.912s CPU time.
Feb  2 06:47:16 np0005604943 conmon[237927]: conmon e8469079de5f6cf85332 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8469079de5f6cf853327e20582fc7412a39dd166f8e8fc6edb7e70c21cf9b07.scope/container/memory.events
Feb  2 06:47:16 np0005604943 podman[238824]: 2026-02-02 11:47:16.349626893 +0000 UTC m=+0.547974132 container died e8469079de5f6cf853327e20582fc7412a39dd166f8e8fc6edb7e70c21cf9b07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 06:47:16 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8469079de5f6cf853327e20582fc7412a39dd166f8e8fc6edb7e70c21cf9b07-userdata-shm.mount: Deactivated successfully.
Feb  2 06:47:16 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1a64f6f7b79961cf555e70f1e1cba521849f0814f4f805b139c10e64115ac7d0-merged.mount: Deactivated successfully.
Feb  2 06:47:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:47:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:18 np0005604943 podman[238824]: 2026-02-02 11:47:18.25118113 +0000 UTC m=+2.449528369 container cleanup e8469079de5f6cf853327e20582fc7412a39dd166f8e8fc6edb7e70c21cf9b07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:47:18 np0005604943 podman[238824]: nova_compute
Feb  2 06:47:18 np0005604943 podman[238855]: nova_compute
Feb  2 06:47:18 np0005604943 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb  2 06:47:18 np0005604943 systemd[1]: Stopped nova_compute container.
Feb  2 06:47:18 np0005604943 systemd[1]: Starting nova_compute container...
Feb  2 06:47:18 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:47:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a64f6f7b79961cf555e70f1e1cba521849f0814f4f805b139c10e64115ac7d0/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a64f6f7b79961cf555e70f1e1cba521849f0814f4f805b139c10e64115ac7d0/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a64f6f7b79961cf555e70f1e1cba521849f0814f4f805b139c10e64115ac7d0/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a64f6f7b79961cf555e70f1e1cba521849f0814f4f805b139c10e64115ac7d0/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a64f6f7b79961cf555e70f1e1cba521849f0814f4f805b139c10e64115ac7d0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:18 np0005604943 podman[238868]: 2026-02-02 11:47:18.44191005 +0000 UTC m=+0.104208612 container init e8469079de5f6cf853327e20582fc7412a39dd166f8e8fc6edb7e70c21cf9b07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:47:18 np0005604943 podman[238868]: 2026-02-02 11:47:18.446062975 +0000 UTC m=+0.108361527 container start e8469079de5f6cf853327e20582fc7412a39dd166f8e8fc6edb7e70c21cf9b07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Feb  2 06:47:18 np0005604943 nova_compute[238883]: + sudo -E kolla_set_configs
Feb  2 06:47:18 np0005604943 podman[238868]: nova_compute
Feb  2 06:47:18 np0005604943 systemd[1]: Started nova_compute container.
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Validating config file
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Copying service configuration files
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Deleting /etc/ceph
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Creating directory /etc/ceph
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /etc/ceph
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Writing out command to execute
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  2 06:47:18 np0005604943 nova_compute[238883]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  2 06:47:18 np0005604943 nova_compute[238883]: ++ cat /run_command
Feb  2 06:47:18 np0005604943 nova_compute[238883]: + CMD=nova-compute
Feb  2 06:47:18 np0005604943 nova_compute[238883]: + ARGS=
Feb  2 06:47:18 np0005604943 nova_compute[238883]: + sudo kolla_copy_cacerts
Feb  2 06:47:18 np0005604943 nova_compute[238883]: + [[ ! -n '' ]]
Feb  2 06:47:18 np0005604943 nova_compute[238883]: + . kolla_extend_start
Feb  2 06:47:18 np0005604943 nova_compute[238883]: Running command: 'nova-compute'
Feb  2 06:47:18 np0005604943 nova_compute[238883]: + echo 'Running command: '\''nova-compute'\'''
Feb  2 06:47:18 np0005604943 nova_compute[238883]: + umask 0022
Feb  2 06:47:18 np0005604943 nova_compute[238883]: + exec nova-compute
Feb  2 06:47:19 np0005604943 python3.9[239046]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb  2 06:47:19 np0005604943 systemd[1]: Started libpod-conmon-3e31e3cd375528e57c299e6338ab958bae8e250e949c22ed5df21174bc4a2ec8.scope.
Feb  2 06:47:19 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:47:19 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b3bb5ae5500404a1e5c4fe8543cff556499d9f8c42fc7f2c6450b2087b3a1e/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:19 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b3bb5ae5500404a1e5c4fe8543cff556499d9f8c42fc7f2c6450b2087b3a1e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:19 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b3bb5ae5500404a1e5c4fe8543cff556499d9f8c42fc7f2c6450b2087b3a1e/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb  2 06:47:19 np0005604943 podman[239072]: 2026-02-02 11:47:19.324603675 +0000 UTC m=+0.108573622 container init 3e31e3cd375528e57c299e6338ab958bae8e250e949c22ed5df21174bc4a2ec8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init)
Feb  2 06:47:19 np0005604943 podman[239072]: 2026-02-02 11:47:19.332649809 +0000 UTC m=+0.116619726 container start 3e31e3cd375528e57c299e6338ab958bae8e250e949c22ed5df21174bc4a2ec8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible)
Feb  2 06:47:19 np0005604943 python3.9[239046]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Applying nova statedir ownership
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb  2 06:47:19 np0005604943 nova_compute_init[239094]: INFO:nova_statedir:Nova statedir ownership complete
Feb  2 06:47:19 np0005604943 systemd[1]: libpod-3e31e3cd375528e57c299e6338ab958bae8e250e949c22ed5df21174bc4a2ec8.scope: Deactivated successfully.
Feb  2 06:47:19 np0005604943 podman[239107]: 2026-02-02 11:47:19.437052945 +0000 UTC m=+0.040273608 container died 3e31e3cd375528e57c299e6338ab958bae8e250e949c22ed5df21174bc4a2ec8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0)
Feb  2 06:47:19 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e31e3cd375528e57c299e6338ab958bae8e250e949c22ed5df21174bc4a2ec8-userdata-shm.mount: Deactivated successfully.
Feb  2 06:47:19 np0005604943 systemd[1]: var-lib-containers-storage-overlay-83b3bb5ae5500404a1e5c4fe8543cff556499d9f8c42fc7f2c6450b2087b3a1e-merged.mount: Deactivated successfully.
Feb  2 06:47:19 np0005604943 podman[239107]: 2026-02-02 11:47:19.472596641 +0000 UTC m=+0.075817214 container cleanup 3e31e3cd375528e57c299e6338ab958bae8e250e949c22ed5df21174bc4a2ec8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb  2 06:47:19 np0005604943 systemd[1]: libpod-conmon-3e31e3cd375528e57c299e6338ab958bae8e250e949c22ed5df21174bc4a2ec8.scope: Deactivated successfully.
Feb  2 06:47:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:20 np0005604943 systemd[1]: session-49.scope: Deactivated successfully.
Feb  2 06:47:20 np0005604943 systemd[1]: session-49.scope: Consumed 1min 42.720s CPU time.
Feb  2 06:47:20 np0005604943 systemd-logind[786]: Session 49 logged out. Waiting for processes to exit.
Feb  2 06:47:20 np0005604943 systemd-logind[786]: Removed session 49.
Feb  2 06:47:20 np0005604943 nova_compute[238883]: 2026-02-02 11:47:20.479 238887 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 06:47:20 np0005604943 nova_compute[238883]: 2026-02-02 11:47:20.479 238887 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 06:47:20 np0005604943 nova_compute[238883]: 2026-02-02 11:47:20.480 238887 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 06:47:20 np0005604943 nova_compute[238883]: 2026-02-02 11:47:20.480 238887 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb  2 06:47:20 np0005604943 nova_compute[238883]: 2026-02-02 11:47:20.619 238887 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:47:20 np0005604943 nova_compute[238883]: 2026-02-02 11:47:20.629 238887 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:47:20 np0005604943 nova_compute[238883]: 2026-02-02 11:47:20.630 238887 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.074 238887 INFO nova.virt.driver [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.182 238887 INFO nova.compute.provider_config [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.251 238887 DEBUG oslo_concurrency.lockutils [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.252 238887 DEBUG oslo_concurrency.lockutils [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.252 238887 DEBUG oslo_concurrency.lockutils [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.252 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.253 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.253 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.253 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.253 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.253 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.253 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.253 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.254 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.254 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.254 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.254 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.254 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.254 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.254 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.255 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.255 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.255 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.255 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.255 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.255 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.256 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.256 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.256 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.256 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.256 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.256 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.257 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.257 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.257 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.257 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.258 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.258 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.258 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.258 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.258 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.259 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.259 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.259 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.259 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.259 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.260 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.260 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.260 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.260 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.260 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.261 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.261 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.261 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.261 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.261 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.262 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.262 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.262 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.262 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.262 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.262 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.263 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.263 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.263 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.263 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.263 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.264 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.264 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.264 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.264 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.264 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.265 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.265 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.265 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.265 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.265 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.266 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.266 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.266 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.266 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.266 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.266 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.267 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.267 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.267 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.267 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.267 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.267 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.268 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.268 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.268 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.268 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.268 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.269 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.269 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.269 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.269 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.269 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.269 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.270 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.270 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.270 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.270 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.270 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.270 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.270 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.271 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.271 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.271 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.271 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.271 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.271 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.271 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.272 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.272 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.272 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.272 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.272 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.272 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.273 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.273 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.273 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.273 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.273 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.274 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.274 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.274 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.274 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.274 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.274 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.274 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.275 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.275 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.275 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.275 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.275 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.276 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.276 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.276 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.276 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.276 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.277 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.277 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.277 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.277 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.277 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.278 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.278 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.278 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.278 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.279 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.279 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.279 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.279 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.279 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.280 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.280 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.280 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.280 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.280 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.281 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.281 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.281 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.281 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.281 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.282 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.282 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.282 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.282 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.282 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.283 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.283 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.283 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.283 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.283 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.284 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.284 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.284 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.284 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.284 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.285 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.285 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.285 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.285 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.285 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.286 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.286 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.286 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.286 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.286 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.286 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.287 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.287 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.287 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.287 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.287 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.287 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.288 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.288 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.288 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.288 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.288 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.289 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.289 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.289 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.289 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.289 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.289 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.290 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.290 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.290 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.290 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.290 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.290 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.291 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.291 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.291 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.291 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.291 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.291 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.292 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.292 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.292 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.292 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.292 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.292 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.292 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.293 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.293 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.293 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.293 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.293 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.293 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.294 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.294 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.294 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.294 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.294 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.294 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.294 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.295 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.295 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.295 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.295 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.295 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.295 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.295 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.296 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.296 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.296 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.296 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.296 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.296 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.296 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.297 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.297 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.297 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.297 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.297 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.298 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.298 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.298 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.298 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.298 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.298 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.299 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.299 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.299 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.299 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.299 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.300 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.300 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.300 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.300 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.300 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.300 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.301 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.301 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.301 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.301 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.301 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.302 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.302 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.302 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.302 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.302 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.303 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.303 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.303 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.303 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.303 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.303 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.304 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.304 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.304 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.304 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.304 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.304 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.305 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.305 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.305 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.305 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.305 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.305 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.305 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.306 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.306 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.306 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.306 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.306 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.306 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.307 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.307 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.307 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.307 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.307 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.307 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.308 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.308 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.308 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.308 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.308 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.308 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.308 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.309 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.309 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.309 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.309 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.309 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.309 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.310 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.310 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.310 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.310 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.310 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.310 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.310 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.311 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.311 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.311 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.311 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.311 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.311 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.311 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.312 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.312 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.312 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.312 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.312 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.313 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.313 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.313 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.313 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.313 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.313 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.313 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.314 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.314 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.314 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.314 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.314 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.314 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.315 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.315 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.315 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.315 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.315 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.315 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.315 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.316 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.316 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.316 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.316 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.316 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.316 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.316 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.317 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.317 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.317 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.317 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.317 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.317 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.317 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.317 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.318 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.318 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.318 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.318 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.318 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.318 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.318 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.319 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.319 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.319 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.319 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.319 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.319 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.319 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.320 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.320 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.320 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.320 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.320 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.320 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.321 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.321 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.321 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.321 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.321 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.321 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.321 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.322 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.322 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.322 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.322 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.322 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.322 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.323 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.323 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.323 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.323 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.323 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.323 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.323 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.324 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.324 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.324 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.324 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.324 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.324 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.325 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.325 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.325 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.325 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.325 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.325 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.326 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.326 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.326 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.326 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.326 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.327 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.327 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.327 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.327 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.327 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.328 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.328 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.328 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.328 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.329 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.329 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.329 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.329 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.329 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.330 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.330 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.330 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.330 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.331 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.331 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.331 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.331 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.331 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.332 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.332 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.332 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.332 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.332 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.333 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.333 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.333 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.333 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.334 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.334 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.334 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.334 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.334 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.335 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.335 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.335 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.335 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.335 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.336 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.336 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.336 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.336 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.337 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.337 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.337 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.338 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.338 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.338 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.338 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.338 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.339 238887 WARNING oslo_config.cfg [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb  2 06:47:21 np0005604943 nova_compute[238883]: live_migration_uri is deprecated for removal in favor of two other options that
Feb  2 06:47:21 np0005604943 nova_compute[238883]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb  2 06:47:21 np0005604943 nova_compute[238883]: and ``live_migration_inbound_addr`` respectively.
Feb  2 06:47:21 np0005604943 nova_compute[238883]: ).  Its value may be silently ignored in the future.#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.339 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.339 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.340 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.340 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.340 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.340 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.340 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.341 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.341 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.341 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.341 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.342 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.342 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.342 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.342 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.342 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.343 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.343 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.343 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.rbd_secret_uuid        = 4548a36b-7cdc-5e3e-a814-4e1571be1fae log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.343 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.344 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.344 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.344 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.344 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.344 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.345 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.345 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.345 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.345 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.345 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.346 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.346 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.346 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.346 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.347 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.347 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.347 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.347 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.348 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.348 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.348 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.348 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.348 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.349 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.349 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.349 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.349 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.349 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.350 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.350 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.350 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.350 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.351 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.351 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.351 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.351 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.351 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.352 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.352 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.352 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.352 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.352 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.353 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.353 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.353 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.353 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.354 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.354 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.354 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.354 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.354 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.355 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.355 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.355 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.355 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.355 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.356 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.356 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.356 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.356 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.357 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.357 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.357 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.357 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.357 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.358 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.358 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.358 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.358 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.358 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.359 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.359 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.359 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.359 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.360 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.360 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.360 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.360 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.360 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.361 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.361 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.361 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.362 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.362 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.362 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.362 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.363 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.363 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.363 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.363 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.363 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.364 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.364 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.364 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.364 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.364 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.365 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.365 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.365 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.365 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.366 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.366 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.366 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.366 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.366 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.367 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.367 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.367 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.367 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.367 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.368 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.368 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.368 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.368 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.369 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.369 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.369 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.369 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.369 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.370 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.370 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.370 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.370 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.371 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.371 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.371 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.371 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.372 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.372 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.372 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.372 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.372 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.373 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.373 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.373 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.373 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.373 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.374 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.374 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.374 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.374 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.375 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.375 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.375 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.375 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.375 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.376 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.376 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.376 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.376 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.376 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.377 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.377 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.377 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.377 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.377 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.378 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.378 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.378 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.378 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.379 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.379 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.379 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.379 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.380 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.380 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.380 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.380 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.380 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.381 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.381 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.381 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.381 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.381 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.382 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.382 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.382 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.382 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.383 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.383 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.383 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.383 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.384 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.384 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.384 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.384 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.384 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.385 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.385 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.385 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.385 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.385 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.386 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.386 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.386 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.386 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.386 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.387 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.387 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.387 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.387 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.388 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.388 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.388 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.388 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.388 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.389 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.389 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.389 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.389 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.389 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.390 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.390 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.390 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.390 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.390 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.391 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.391 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.391 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.391 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.391 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.392 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.392 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.392 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.392 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.393 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.393 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.393 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.393 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.394 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.394 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.394 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.394 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.395 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.395 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.395 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.395 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.395 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.396 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.396 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.396 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.396 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.396 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.397 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.397 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.397 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.397 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.397 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.398 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.398 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.398 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.398 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.399 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.399 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.399 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.399 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.399 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.400 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.400 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.400 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.400 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.400 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.401 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.401 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.401 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.401 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.401 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.402 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.402 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.402 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.402 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.403 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.403 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.403 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.403 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.404 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.404 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.404 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.404 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.404 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.405 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.405 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.405 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.405 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.406 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.406 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.406 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.406 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.406 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.407 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.407 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.407 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.407 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.407 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.408 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.408 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.408 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.408 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.409 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.409 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.409 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.409 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.409 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.410 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.410 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.410 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.410 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.410 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.411 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.411 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.411 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.411 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.412 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.412 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.412 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.412 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.412 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.413 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.413 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.413 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.413 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.414 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.414 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.414 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.414 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.414 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.415 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.415 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.415 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.415 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.415 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.416 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.416 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.416 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.416 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.416 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.417 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.417 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.417 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.417 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.418 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.418 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.418 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.418 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.418 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.419 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.419 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.419 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.419 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.419 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.420 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.420 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.420 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.420 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.420 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.421 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.421 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.421 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.421 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.422 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.422 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.422 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.422 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.422 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.423 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.423 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.423 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.423 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.423 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.424 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.424 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.424 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.424 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.424 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.425 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.425 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.425 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.425 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.426 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.426 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.426 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.426 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.426 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.427 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.427 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.427 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.427 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.427 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.428 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.428 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.428 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.428 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.429 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.429 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.429 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.429 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.429 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.430 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.430 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.430 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.430 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.430 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.431 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.431 238887 DEBUG oslo_service.service [None req-23788588-f352-4a5e-b1a3-d5738465b55d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.432 238887 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.451 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.452 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.452 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.452 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.464 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f1f003fe880> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.467 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f1f003fe880> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.467 238887 INFO nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Connection event '1' reason 'None'#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.473 238887 INFO nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Libvirt host capabilities <capabilities>
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <host>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <uuid>4ccddb6b-e5c4-4cee-96ab-cfd456961526</uuid>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <cpu>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <arch>x86_64</arch>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model>EPYC-Rome-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <vendor>AMD</vendor>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <microcode version='16777317'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <signature family='23' model='49' stepping='0'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <maxphysaddr mode='emulate' bits='40'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='x2apic'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='tsc-deadline'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='osxsave'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='hypervisor'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='tsc_adjust'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='spec-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='stibp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='arch-capabilities'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='cmp_legacy'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='topoext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='virt-ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='lbrv'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='tsc-scale'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='vmcb-clean'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='pause-filter'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='pfthreshold'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='svme-addr-chk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='rdctl-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='skip-l1dfl-vmentry'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='mds-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature name='pschange-mc-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <pages unit='KiB' size='4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <pages unit='KiB' size='2048'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <pages unit='KiB' size='1048576'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </cpu>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <power_management>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <suspend_mem/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </power_management>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <iommu support='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <migration_features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <live/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <uri_transports>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <uri_transport>tcp</uri_transport>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <uri_transport>rdma</uri_transport>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </uri_transports>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </migration_features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <topology>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <cells num='1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <cell id='0'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:          <memory unit='KiB'>7864300</memory>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:          <pages unit='KiB' size='4'>1966075</pages>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:          <pages unit='KiB' size='2048'>0</pages>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:          <pages unit='KiB' size='1048576'>0</pages>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:          <distances>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:            <sibling id='0' value='10'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:          </distances>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:          <cpus num='8'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:          </cpus>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        </cell>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </cells>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </topology>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <cache>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </cache>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <secmodel>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model>selinux</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <doi>0</doi>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </secmodel>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <secmodel>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model>dac</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <doi>0</doi>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <baselabel type='kvm'>+107:+107</baselabel>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <baselabel type='qemu'>+107:+107</baselabel>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </secmodel>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </host>
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <guest>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <os_type>hvm</os_type>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <arch name='i686'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <wordsize>32</wordsize>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <domain type='qemu'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <domain type='kvm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </arch>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <pae/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <nonpae/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <acpi default='on' toggle='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <apic default='on' toggle='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <cpuselection/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <deviceboot/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <disksnapshot default='on' toggle='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <externalSnapshot/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </guest>
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <guest>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <os_type>hvm</os_type>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <arch name='x86_64'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <wordsize>64</wordsize>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <domain type='qemu'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <domain type='kvm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </arch>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <acpi default='on' toggle='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <apic default='on' toggle='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <cpuselection/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <deviceboot/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <disksnapshot default='on' toggle='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <externalSnapshot/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </guest>
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 
Feb  2 06:47:21 np0005604943 nova_compute[238883]: </capabilities>
Feb  2 06:47:21 np0005604943 nova_compute[238883]: #033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.477 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.495 238887 WARNING nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.496 238887 DEBUG nova.virt.libvirt.volume.mount [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.500 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb  2 06:47:21 np0005604943 nova_compute[238883]: <domainCapabilities>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <domain>kvm</domain>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <machine>pc-q35-rhel9.8.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <arch>i686</arch>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <vcpu max='4096'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <iothreads supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <os supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <enum name='firmware'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <loader supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>rom</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pflash</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='readonly'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>yes</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>no</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='secure'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>no</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </loader>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <cpu>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='host-passthrough' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='hostPassthroughMigratable'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>on</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>off</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='maximum' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='maximumMigratable'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>on</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>off</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='host-model' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <vendor>AMD</vendor>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='x2apic'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='hypervisor'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='stibp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='overflow-recov'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='succor'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='lbrv'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='tsc-scale'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='flushbyasid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='pause-filter'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='pfthreshold'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='disable' name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='custom' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='ClearwaterForest'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ddpd-u'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sha512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm3'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='ClearwaterForest-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ddpd-u'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sha512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm3'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cooperlake'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cooperlake-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cooperlake-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Dhyana-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Genoa'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fs-gs-base-ns'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='perfmon-v2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Turin'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vp2intersect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fs-gs-base-ns'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibpb-brtype'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='perfmon-v2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbpb'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='srso-user-kernel-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Turin-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vp2intersect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fs-gs-base-ns'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibpb-brtype'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='perfmon-v2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbpb'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='srso-user-kernel-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-128'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-256'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-128'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-256'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v6'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v7'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='KnightsMill'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4fmaps'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4vnniw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512er'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512pf'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='KnightsMill-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4fmaps'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4vnniw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512er'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512pf'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G4-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tbm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G5-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tbm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='athlon'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='athlon-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='core2duo'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='core2duo-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='coreduo'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='coreduo-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='n270'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='n270-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='phenom'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='phenom-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <memoryBacking supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <enum name='sourceType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>file</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>anonymous</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>memfd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </memoryBacking>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <disk supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='diskDevice'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>disk</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>cdrom</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>floppy</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>lun</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='bus'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>fdc</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>scsi</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>usb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>sata</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-non-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <graphics supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vnc</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>egl-headless</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>dbus</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </graphics>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <video supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='modelType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vga</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>cirrus</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>none</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>bochs</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>ramfb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <hostdev supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='mode'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>subsystem</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='startupPolicy'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>default</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>mandatory</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>requisite</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>optional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='subsysType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>usb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pci</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>scsi</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='capsType'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='pciBackend'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </hostdev>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <rng supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-non-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendModel'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>random</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>egd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>builtin</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <filesystem supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='driverType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>path</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>handle</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtiofs</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </filesystem>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <tpm supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tpm-tis</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tpm-crb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendModel'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>emulator</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>external</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendVersion'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>2.0</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </tpm>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <redirdev supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='bus'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>usb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </redirdev>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <channel supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pty</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>unix</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </channel>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <crypto supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>qemu</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendModel'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>builtin</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </crypto>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <interface supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>default</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>passt</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <panic supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>isa</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>hyperv</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </panic>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <console supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>null</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vc</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pty</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>dev</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>file</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pipe</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>stdio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>udp</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tcp</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>unix</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>qemu-vdagent</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>dbus</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </console>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <gic supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <vmcoreinfo supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <genid supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <backingStoreInput supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <backup supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <async-teardown supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <s390-pv supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <ps2 supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <tdx supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <sev supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <sgx supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <hyperv supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='features'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>relaxed</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vapic</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>spinlocks</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vpindex</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>runtime</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>synic</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>stimer</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>reset</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vendor_id</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>frequencies</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>reenlightenment</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tlbflush</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>ipi</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>avic</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>emsr_bitmap</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>xmm_input</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <defaults>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <spinlocks>4095</spinlocks>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <stimer_direct>on</stimer_direct>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </defaults>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </hyperv>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <launchSecurity supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]: </domainCapabilities>
Feb  2 06:47:21 np0005604943 nova_compute[238883]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.505 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb  2 06:47:21 np0005604943 nova_compute[238883]: <domainCapabilities>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <domain>kvm</domain>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <arch>i686</arch>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <vcpu max='240'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <iothreads supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <os supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <enum name='firmware'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <loader supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>rom</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pflash</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='readonly'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>yes</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>no</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='secure'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>no</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </loader>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <cpu>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='host-passthrough' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='hostPassthroughMigratable'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>on</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>off</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='maximum' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='maximumMigratable'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>on</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>off</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='host-model' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <vendor>AMD</vendor>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='x2apic'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='hypervisor'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='stibp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='overflow-recov'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='succor'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='lbrv'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='tsc-scale'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='flushbyasid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='pause-filter'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='pfthreshold'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='disable' name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='custom' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='ClearwaterForest'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ddpd-u'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sha512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm3'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='ClearwaterForest-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ddpd-u'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sha512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm3'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cooperlake'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cooperlake-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cooperlake-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Dhyana-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Genoa'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fs-gs-base-ns'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='perfmon-v2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Turin'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vp2intersect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fs-gs-base-ns'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibpb-brtype'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='perfmon-v2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbpb'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='srso-user-kernel-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Turin-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vp2intersect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fs-gs-base-ns'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibpb-brtype'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='perfmon-v2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbpb'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='srso-user-kernel-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-128'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-256'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-128'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-256'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v6'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v7'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='KnightsMill'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4fmaps'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4vnniw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512er'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512pf'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='KnightsMill-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4fmaps'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4vnniw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512er'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512pf'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G4-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tbm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G5-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tbm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='athlon'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='athlon-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='core2duo'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='core2duo-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='coreduo'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='coreduo-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='n270'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='n270-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='phenom'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='phenom-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <memoryBacking supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <enum name='sourceType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>file</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>anonymous</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>memfd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </memoryBacking>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <disk supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='diskDevice'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>disk</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>cdrom</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>floppy</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>lun</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='bus'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>ide</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>fdc</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>scsi</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>usb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>sata</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-non-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <graphics supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vnc</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>egl-headless</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>dbus</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </graphics>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <video supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='modelType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vga</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>cirrus</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>none</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>bochs</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>ramfb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <hostdev supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='mode'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>subsystem</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='startupPolicy'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>default</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>mandatory</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>requisite</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>optional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='subsysType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>usb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pci</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>scsi</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='capsType'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='pciBackend'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </hostdev>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <rng supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-non-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendModel'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>random</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>egd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>builtin</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <filesystem supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='driverType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>path</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>handle</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtiofs</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </filesystem>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <tpm supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tpm-tis</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tpm-crb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendModel'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>emulator</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>external</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendVersion'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>2.0</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </tpm>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <redirdev supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='bus'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>usb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </redirdev>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <channel supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pty</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>unix</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </channel>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <crypto supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>qemu</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendModel'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>builtin</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </crypto>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <interface supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>default</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>passt</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <panic supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>isa</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>hyperv</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </panic>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <console supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>null</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vc</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pty</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>dev</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>file</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pipe</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>stdio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>udp</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tcp</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>unix</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>qemu-vdagent</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>dbus</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </console>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <gic supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <vmcoreinfo supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <genid supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <backingStoreInput supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <backup supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <async-teardown supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <s390-pv supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <ps2 supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <tdx supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <sev supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <sgx supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <hyperv supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='features'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>relaxed</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vapic</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>spinlocks</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vpindex</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>runtime</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>synic</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>stimer</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>reset</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vendor_id</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>frequencies</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>reenlightenment</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tlbflush</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>ipi</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>avic</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>emsr_bitmap</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>xmm_input</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <defaults>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <spinlocks>4095</spinlocks>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <stimer_direct>on</stimer_direct>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </defaults>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </hyperv>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <launchSecurity supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]: </domainCapabilities>
Feb  2 06:47:21 np0005604943 nova_compute[238883]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.573 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.581 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb  2 06:47:21 np0005604943 nova_compute[238883]: <domainCapabilities>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <domain>kvm</domain>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <machine>pc-q35-rhel9.8.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <arch>x86_64</arch>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <vcpu max='4096'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <iothreads supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <os supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <enum name='firmware'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>efi</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <loader supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>rom</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pflash</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='readonly'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>yes</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>no</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='secure'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>yes</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>no</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </loader>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <cpu>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='host-passthrough' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='hostPassthroughMigratable'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>on</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>off</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='maximum' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='maximumMigratable'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>on</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>off</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='host-model' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <vendor>AMD</vendor>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='x2apic'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='hypervisor'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='stibp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='overflow-recov'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='succor'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='lbrv'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='tsc-scale'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='flushbyasid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='pause-filter'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='pfthreshold'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='disable' name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='custom' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='ClearwaterForest'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ddpd-u'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sha512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm3'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='ClearwaterForest-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ddpd-u'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sha512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm3'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cooperlake'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cooperlake-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cooperlake-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Dhyana-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Genoa'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fs-gs-base-ns'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='perfmon-v2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Turin'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vp2intersect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fs-gs-base-ns'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibpb-brtype'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='perfmon-v2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbpb'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='srso-user-kernel-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Turin-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vp2intersect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fs-gs-base-ns'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibpb-brtype'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='perfmon-v2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbpb'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='srso-user-kernel-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-128'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-256'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-128'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-256'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v6'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v7'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='KnightsMill'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4fmaps'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4vnniw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512er'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512pf'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='KnightsMill-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4fmaps'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4vnniw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512er'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512pf'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G4-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tbm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G5-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tbm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='athlon'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='athlon-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='core2duo'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='core2duo-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='coreduo'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='coreduo-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='n270'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='n270-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='phenom'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='phenom-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <memoryBacking supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <enum name='sourceType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>file</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>anonymous</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>memfd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </memoryBacking>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <disk supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='diskDevice'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>disk</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>cdrom</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>floppy</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>lun</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='bus'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>fdc</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>scsi</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>usb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>sata</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-non-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <graphics supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vnc</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>egl-headless</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>dbus</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </graphics>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <video supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='modelType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vga</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>cirrus</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>none</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>bochs</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>ramfb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <hostdev supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='mode'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>subsystem</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='startupPolicy'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>default</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>mandatory</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>requisite</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>optional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='subsysType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>usb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pci</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>scsi</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='capsType'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='pciBackend'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </hostdev>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <rng supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-non-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendModel'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>random</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>egd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>builtin</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <filesystem supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='driverType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>path</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>handle</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtiofs</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </filesystem>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <tpm supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tpm-tis</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tpm-crb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendModel'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>emulator</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>external</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendVersion'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>2.0</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </tpm>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <redirdev supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='bus'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>usb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </redirdev>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <channel supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pty</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>unix</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </channel>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <crypto supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>qemu</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendModel'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>builtin</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </crypto>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <interface supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>default</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>passt</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <panic supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>isa</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>hyperv</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </panic>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <console supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>null</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vc</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pty</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>dev</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>file</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pipe</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>stdio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>udp</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tcp</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>unix</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>qemu-vdagent</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>dbus</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </console>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <gic supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <vmcoreinfo supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <genid supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <backingStoreInput supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <backup supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <async-teardown supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <s390-pv supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <ps2 supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <tdx supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <sev supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <sgx supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <hyperv supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='features'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>relaxed</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vapic</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>spinlocks</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vpindex</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>runtime</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>synic</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>stimer</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>reset</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vendor_id</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>frequencies</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>reenlightenment</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tlbflush</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>ipi</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>avic</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>emsr_bitmap</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>xmm_input</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <defaults>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <spinlocks>4095</spinlocks>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <stimer_direct>on</stimer_direct>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </defaults>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </hyperv>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <launchSecurity supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]: </domainCapabilities>
Feb  2 06:47:21 np0005604943 nova_compute[238883]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.657 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb  2 06:47:21 np0005604943 nova_compute[238883]: <domainCapabilities>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <domain>kvm</domain>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <arch>x86_64</arch>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <vcpu max='240'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <iothreads supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <os supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <enum name='firmware'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <loader supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>rom</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pflash</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='readonly'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>yes</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>no</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='secure'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>no</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </loader>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <cpu>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='host-passthrough' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='hostPassthroughMigratable'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>on</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>off</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='maximum' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='maximumMigratable'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>on</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>off</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='host-model' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <vendor>AMD</vendor>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='x2apic'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='hypervisor'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='stibp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='overflow-recov'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='succor'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='lbrv'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='tsc-scale'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='flushbyasid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='pause-filter'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='pfthreshold'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <feature policy='disable' name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <mode name='custom' supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Broadwell-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='ClearwaterForest'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ddpd-u'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sha512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm3'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='ClearwaterForest-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ddpd-u'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sha512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm3'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sm4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cooperlake'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cooperlake-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Cooperlake-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Denverton-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Dhyana-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Genoa'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fs-gs-base-ns'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='perfmon-v2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Milan-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Rome-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Turin'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vp2intersect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fs-gs-base-ns'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibpb-brtype'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='perfmon-v2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbpb'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='srso-user-kernel-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-Turin-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amd-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='auto-ibrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vp2intersect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fs-gs-base-ns'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibpb-brtype'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='no-nested-data-bp'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='null-sel-clr-base'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='perfmon-v2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbpb'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='srso-user-kernel-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='stibp-always-on'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='EPYC-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-128'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-256'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='GraniteRapids-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-128'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-256'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx10-512'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='prefetchiti'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Haswell-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v6'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Icelake-Server-v7'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='IvyBridge-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='KnightsMill'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4fmaps'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4vnniw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512er'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512pf'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='KnightsMill-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4fmaps'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-4vnniw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512er'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512pf'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G4-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tbm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Opteron_G5-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fma4'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tbm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xop'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SapphireRapids-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='amx-tile'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-bf16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-fp16'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512-vpopcntdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bitalg'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vbmi2'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrc'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fzrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='la57'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='taa-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='tsx-ldtrk'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='SierraForest-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ifma'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-ne-convert'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx-vnni-int8'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bhi-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='bus-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cmpccxadd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fbsdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='fsrs'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ibrs-all'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='intel-psfd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ipred-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='lam'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mcdt-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pbrsb-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='psdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rrsba-ctrl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='sbdr-ssdp-no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='serialize'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vaes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='vpclmulqdq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Client-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='hle'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='rtm'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Skylake-Server-v5'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512bw'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512cd'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512dq'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512f'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='avx512vl'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='invpcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pcid'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='pku'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='mpx'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v2'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v3'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='core-capability'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='split-lock-detect'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='Snowridge-v4'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='cldemote'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='erms'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='gfni'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdir64b'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='movdiri'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='xsaves'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='athlon'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='athlon-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='core2duo'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='core2duo-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='coreduo'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='coreduo-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='n270'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='n270-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='ss'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='phenom'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <blockers model='phenom-v1'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnow'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <feature name='3dnowext'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </blockers>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </mode>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <memoryBacking supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <enum name='sourceType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>file</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>anonymous</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <value>memfd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </memoryBacking>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <disk supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='diskDevice'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>disk</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>cdrom</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>floppy</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>lun</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='bus'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>ide</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>fdc</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>scsi</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>usb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>sata</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-non-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <graphics supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vnc</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>egl-headless</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>dbus</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </graphics>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <video supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='modelType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vga</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>cirrus</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>none</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>bochs</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>ramfb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <hostdev supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='mode'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>subsystem</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='startupPolicy'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>default</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>mandatory</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>requisite</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>optional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='subsysType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>usb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pci</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>scsi</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='capsType'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='pciBackend'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </hostdev>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <rng supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtio-non-transitional</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendModel'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>random</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>egd</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>builtin</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <filesystem supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='driverType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>path</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>handle</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>virtiofs</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </filesystem>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <tpm supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tpm-tis</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tpm-crb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendModel'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>emulator</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>external</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendVersion'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>2.0</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </tpm>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <redirdev supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='bus'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>usb</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </redirdev>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <channel supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pty</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>unix</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </channel>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <crypto supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>qemu</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendModel'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>builtin</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </crypto>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <interface supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='backendType'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>default</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>passt</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <panic supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='model'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>isa</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>hyperv</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </panic>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <console supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='type'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>null</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vc</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pty</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>dev</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>file</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>pipe</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>stdio</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>udp</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tcp</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>unix</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>qemu-vdagent</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>dbus</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </console>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <gic supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <vmcoreinfo supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <genid supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <backingStoreInput supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <backup supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <async-teardown supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <s390-pv supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <ps2 supported='yes'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <tdx supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <sev supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <sgx supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <hyperv supported='yes'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <enum name='features'>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>relaxed</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vapic</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>spinlocks</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vpindex</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>runtime</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>synic</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>stimer</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>reset</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>vendor_id</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>frequencies</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>reenlightenment</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>tlbflush</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>ipi</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>avic</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>emsr_bitmap</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <value>xmm_input</value>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </enum>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      <defaults>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <spinlocks>4095</spinlocks>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <stimer_direct>on</stimer_direct>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:      </defaults>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    </hyperv>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:    <launchSecurity supported='no'/>
Feb  2 06:47:21 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:47:21 np0005604943 nova_compute[238883]: </domainCapabilities>
Feb  2 06:47:21 np0005604943 nova_compute[238883]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.727 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.728 238887 INFO nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Secure Boot support detected#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.732 238887 INFO nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.733 238887 INFO nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.741 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.788 238887 INFO nova.virt.node [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Determined node identity 30401227-b88f-415d-9c2d-3119bd1baf61 from /var/lib/nova/compute_id#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.811 238887 WARNING nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Compute nodes ['30401227-b88f-415d-9c2d-3119bd1baf61'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.861 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.909 238887 WARNING nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.909 238887 DEBUG oslo_concurrency.lockutils [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.910 238887 DEBUG oslo_concurrency.lockutils [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.910 238887 DEBUG oslo_concurrency.lockutils [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.910 238887 DEBUG nova.compute.resource_tracker [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 06:47:21 np0005604943 nova_compute[238883]: 2026-02-02 11:47:21.910 238887 DEBUG oslo_concurrency.processutils [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:47:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:47:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:47:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/608584506' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:47:22 np0005604943 nova_compute[238883]: 2026-02-02 11:47:22.428 238887 DEBUG oslo_concurrency.processutils [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:47:22 np0005604943 systemd[1]: Starting libvirt nodedev daemon...
Feb  2 06:47:22 np0005604943 systemd[1]: Started libvirt nodedev daemon.
Feb  2 06:47:22 np0005604943 nova_compute[238883]: 2026-02-02 11:47:22.701 238887 WARNING nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:47:22 np0005604943 nova_compute[238883]: 2026-02-02 11:47:22.703 238887 DEBUG nova.compute.resource_tracker [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5080MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 06:47:22 np0005604943 nova_compute[238883]: 2026-02-02 11:47:22.703 238887 DEBUG oslo_concurrency.lockutils [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:47:22 np0005604943 nova_compute[238883]: 2026-02-02 11:47:22.704 238887 DEBUG oslo_concurrency.lockutils [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:47:22 np0005604943 nova_compute[238883]: 2026-02-02 11:47:22.725 238887 WARNING nova.compute.resource_tracker [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] No compute node record for compute-0.ctlplane.example.com:30401227-b88f-415d-9c2d-3119bd1baf61: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 30401227-b88f-415d-9c2d-3119bd1baf61 could not be found.#033[00m
Feb  2 06:47:22 np0005604943 nova_compute[238883]: 2026-02-02 11:47:22.754 238887 INFO nova.compute.resource_tracker [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 30401227-b88f-415d-9c2d-3119bd1baf61#033[00m
Feb  2 06:47:22 np0005604943 nova_compute[238883]: 2026-02-02 11:47:22.821 238887 DEBUG nova.compute.resource_tracker [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 06:47:22 np0005604943 nova_compute[238883]: 2026-02-02 11:47:22.821 238887 DEBUG nova.compute.resource_tracker [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 06:47:23 np0005604943 nova_compute[238883]: 2026-02-02 11:47:23.806 238887 INFO nova.scheduler.client.report [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [req-6d82253b-9367-4f25-8896-d7d4f108ebf1] Created resource provider record via placement API for resource provider with UUID 30401227-b88f-415d-9c2d-3119bd1baf61 and name compute-0.ctlplane.example.com.#033[00m
Feb  2 06:47:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.171 238887 DEBUG oslo_concurrency.processutils [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:47:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:47:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2424030620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.670 238887 DEBUG oslo_concurrency.processutils [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.675 238887 DEBUG nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb  2 06:47:24 np0005604943 nova_compute[238883]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.675 238887 INFO nova.virt.libvirt.host [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] kernel doesn't support AMD SEV#033[00m
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.677 238887 DEBUG nova.compute.provider_tree [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Updating inventory in ProviderTree for provider 30401227-b88f-415d-9c2d-3119bd1baf61 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.677 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.743 238887 DEBUG nova.scheduler.client.report [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Updated inventory for provider 30401227-b88f-415d-9c2d-3119bd1baf61 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.743 238887 DEBUG nova.compute.provider_tree [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Updating resource provider 30401227-b88f-415d-9c2d-3119bd1baf61 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.744 238887 DEBUG nova.compute.provider_tree [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Updating inventory in ProviderTree for provider 30401227-b88f-415d-9c2d-3119bd1baf61 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.835 238887 DEBUG nova.compute.provider_tree [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Updating resource provider 30401227-b88f-415d-9c2d-3119bd1baf61 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.861 238887 DEBUG nova.compute.resource_tracker [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.861 238887 DEBUG oslo_concurrency.lockutils [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.862 238887 DEBUG nova.service [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.974 238887 DEBUG nova.service [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Feb  2 06:47:24 np0005604943 nova_compute[238883]: 2026-02-02 11:47:24.974 238887 DEBUG nova.servicegroup.drivers.db [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Feb  2 06:47:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:47:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:47:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:47:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:47:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:47:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:47:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:47:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:47:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:47:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:47:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:47:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/631093886' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:47:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:47:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/631093886' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:47:44 np0005604943 podman[239253]: 2026-02-02 11:47:44.0660315 +0000 UTC m=+0.084980262 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:47:44 np0005604943 podman[239252]: 2026-02-02 11:47:44.092967309 +0000 UTC m=+0.111897881 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:47:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:47:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/926205566' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:47:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:47:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/926205566' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:47:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:47:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3905432609' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:47:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:47:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3905432609' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:47:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:47:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:47:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:47:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:47:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb  2 06:47:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 06:48:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 06:48:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:48:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 06:48:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 06:48:05 np0005604943 nova_compute[238883]: 2026-02-02 11:48:05.976 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:48:05 np0005604943 nova_compute[238883]: 2026-02-02 11:48:05.999 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:48:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:48:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:48:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:48:09
Feb  2 06:48:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:48:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:48:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['.rgw.root', 'vms', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', '.mgr', 'default.rgw.meta']
Feb  2 06:48:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:48:09 np0005604943 podman[239440]: 2026-02-02 11:48:09.80537246 +0000 UTC m=+0.048535382 container create e792d681f5393b170ba0d34c4ab19b6aecd71eb55cea20e9feec55ef1fffbd63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ride, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:48:09 np0005604943 systemd[1]: Started libpod-conmon-e792d681f5393b170ba0d34c4ab19b6aecd71eb55cea20e9feec55ef1fffbd63.scope.
Feb  2 06:48:09 np0005604943 podman[239440]: 2026-02-02 11:48:09.775008038 +0000 UTC m=+0.018170990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:48:09 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:48:09 np0005604943 podman[239440]: 2026-02-02 11:48:09.923562394 +0000 UTC m=+0.166725416 container init e792d681f5393b170ba0d34c4ab19b6aecd71eb55cea20e9feec55ef1fffbd63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ride, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:48:09 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:48:09 np0005604943 podman[239440]: 2026-02-02 11:48:09.928293453 +0000 UTC m=+0.171456365 container start e792d681f5393b170ba0d34c4ab19b6aecd71eb55cea20e9feec55ef1fffbd63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ride, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:48:09 np0005604943 dazzling_ride[239456]: 167 167
Feb  2 06:48:09 np0005604943 systemd[1]: libpod-e792d681f5393b170ba0d34c4ab19b6aecd71eb55cea20e9feec55ef1fffbd63.scope: Deactivated successfully.
Feb  2 06:48:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb  2 06:48:09 np0005604943 podman[239440]: 2026-02-02 11:48:09.957785002 +0000 UTC m=+0.200947964 container attach e792d681f5393b170ba0d34c4ab19b6aecd71eb55cea20e9feec55ef1fffbd63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ride, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:48:09 np0005604943 podman[239440]: 2026-02-02 11:48:09.958848972 +0000 UTC m=+0.202011934 container died e792d681f5393b170ba0d34c4ab19b6aecd71eb55cea20e9feec55ef1fffbd63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ride, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 06:48:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:48:10.012 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:48:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:48:10.013 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:48:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:48:10.013 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:48:10 np0005604943 systemd[1]: var-lib-containers-storage-overlay-14bd8d23a0e001f0e575b778c9fb6de232f297b12e75c4465d90c949c81562ff-merged.mount: Deactivated successfully.
Feb  2 06:48:10 np0005604943 podman[239440]: 2026-02-02 11:48:10.161107581 +0000 UTC m=+0.404270533 container remove e792d681f5393b170ba0d34c4ab19b6aecd71eb55cea20e9feec55ef1fffbd63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ride, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 06:48:10 np0005604943 systemd[1]: libpod-conmon-e792d681f5393b170ba0d34c4ab19b6aecd71eb55cea20e9feec55ef1fffbd63.scope: Deactivated successfully.
Feb  2 06:48:10 np0005604943 podman[239480]: 2026-02-02 11:48:10.290948764 +0000 UTC m=+0.042578889 container create 4078543b290095d822063aae4443bf30447a226b42c37cf69084fa8e4ad0a1dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 06:48:10 np0005604943 systemd[1]: Started libpod-conmon-4078543b290095d822063aae4443bf30447a226b42c37cf69084fa8e4ad0a1dc.scope.
Feb  2 06:48:10 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:48:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85f379fb446ee532dee9c816df683159745da9a9a56906745d9388f8cb60a7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:48:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85f379fb446ee532dee9c816df683159745da9a9a56906745d9388f8cb60a7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:48:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85f379fb446ee532dee9c816df683159745da9a9a56906745d9388f8cb60a7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:48:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85f379fb446ee532dee9c816df683159745da9a9a56906745d9388f8cb60a7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:48:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85f379fb446ee532dee9c816df683159745da9a9a56906745d9388f8cb60a7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:48:10 np0005604943 podman[239480]: 2026-02-02 11:48:10.368497861 +0000 UTC m=+0.120127986 container init 4078543b290095d822063aae4443bf30447a226b42c37cf69084fa8e4ad0a1dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_morse, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:48:10 np0005604943 podman[239480]: 2026-02-02 11:48:10.272915439 +0000 UTC m=+0.024545564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:48:10 np0005604943 podman[239480]: 2026-02-02 11:48:10.373898879 +0000 UTC m=+0.125529004 container start 4078543b290095d822063aae4443bf30447a226b42c37cf69084fa8e4ad0a1dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 06:48:10 np0005604943 podman[239480]: 2026-02-02 11:48:10.381054365 +0000 UTC m=+0.132684490 container attach 4078543b290095d822063aae4443bf30447a226b42c37cf69084fa8e4ad0a1dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 06:48:10 np0005604943 clever_morse[239496]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:48:10 np0005604943 clever_morse[239496]: --> All data devices are unavailable
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:48:10 np0005604943 systemd[1]: libpod-4078543b290095d822063aae4443bf30447a226b42c37cf69084fa8e4ad0a1dc.scope: Deactivated successfully.
Feb  2 06:48:10 np0005604943 conmon[239496]: conmon 4078543b290095d82206 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4078543b290095d822063aae4443bf30447a226b42c37cf69084fa8e4ad0a1dc.scope/container/memory.events
Feb  2 06:48:10 np0005604943 podman[239480]: 2026-02-02 11:48:10.764024124 +0000 UTC m=+0.515654249 container died 4078543b290095d822063aae4443bf30447a226b42c37cf69084fa8e4ad0a1dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:48:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:48:10 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b85f379fb446ee532dee9c816df683159745da9a9a56906745d9388f8cb60a7f-merged.mount: Deactivated successfully.
Feb  2 06:48:11 np0005604943 podman[239480]: 2026-02-02 11:48:11.252384655 +0000 UTC m=+1.004014800 container remove 4078543b290095d822063aae4443bf30447a226b42c37cf69084fa8e4ad0a1dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 06:48:11 np0005604943 systemd[1]: libpod-conmon-4078543b290095d822063aae4443bf30447a226b42c37cf69084fa8e4ad0a1dc.scope: Deactivated successfully.
Feb  2 06:48:11 np0005604943 podman[239593]: 2026-02-02 11:48:11.612126035 +0000 UTC m=+0.020568965 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:48:11 np0005604943 podman[239593]: 2026-02-02 11:48:11.71218163 +0000 UTC m=+0.120624550 container create 96ccabb16271f01f47b04a261363c113f0c8498bea445b14d58cd347a1deac33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_matsumoto, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Feb  2 06:48:11 np0005604943 systemd[1]: Started libpod-conmon-96ccabb16271f01f47b04a261363c113f0c8498bea445b14d58cd347a1deac33.scope.
Feb  2 06:48:11 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:48:11 np0005604943 podman[239593]: 2026-02-02 11:48:11.868343495 +0000 UTC m=+0.276786435 container init 96ccabb16271f01f47b04a261363c113f0c8498bea445b14d58cd347a1deac33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 06:48:11 np0005604943 podman[239593]: 2026-02-02 11:48:11.874502924 +0000 UTC m=+0.282945844 container start 96ccabb16271f01f47b04a261363c113f0c8498bea445b14d58cd347a1deac33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_matsumoto, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:48:11 np0005604943 hardcore_matsumoto[239609]: 167 167
Feb  2 06:48:11 np0005604943 systemd[1]: libpod-96ccabb16271f01f47b04a261363c113f0c8498bea445b14d58cd347a1deac33.scope: Deactivated successfully.
Feb  2 06:48:11 np0005604943 podman[239593]: 2026-02-02 11:48:11.906679007 +0000 UTC m=+0.315121957 container attach 96ccabb16271f01f47b04a261363c113f0c8498bea445b14d58cd347a1deac33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:48:11 np0005604943 podman[239593]: 2026-02-02 11:48:11.907480919 +0000 UTC m=+0.315923849 container died 96ccabb16271f01f47b04a261363c113f0c8498bea445b14d58cd347a1deac33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_matsumoto, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:48:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:11 np0005604943 systemd[1]: var-lib-containers-storage-overlay-3e186b2c5b6548c75189eaf61bc37cd3225e8ca9bb842ab47d7731a4dc214162-merged.mount: Deactivated successfully.
Feb  2 06:48:12 np0005604943 podman[239593]: 2026-02-02 11:48:12.212099697 +0000 UTC m=+0.620542657 container remove 96ccabb16271f01f47b04a261363c113f0c8498bea445b14d58cd347a1deac33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 06:48:12 np0005604943 systemd[1]: libpod-conmon-96ccabb16271f01f47b04a261363c113f0c8498bea445b14d58cd347a1deac33.scope: Deactivated successfully.
Feb  2 06:48:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:48:12 np0005604943 podman[239633]: 2026-02-02 11:48:12.369757453 +0000 UTC m=+0.072261874 container create 70a8219a7d1abf9f622f677571c2728988f97be30a331db2a0ba2e0741644051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_herschel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:48:12 np0005604943 systemd[1]: Started libpod-conmon-70a8219a7d1abf9f622f677571c2728988f97be30a331db2a0ba2e0741644051.scope.
Feb  2 06:48:12 np0005604943 podman[239633]: 2026-02-02 11:48:12.318858136 +0000 UTC m=+0.021362577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:48:12 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:48:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfef004bac63885f505a09f3b990d640714b359ec97838998a638685dee66e2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:48:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfef004bac63885f505a09f3b990d640714b359ec97838998a638685dee66e2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:48:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfef004bac63885f505a09f3b990d640714b359ec97838998a638685dee66e2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:48:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfef004bac63885f505a09f3b990d640714b359ec97838998a638685dee66e2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:48:12 np0005604943 podman[239633]: 2026-02-02 11:48:12.452457862 +0000 UTC m=+0.154962303 container init 70a8219a7d1abf9f622f677571c2728988f97be30a331db2a0ba2e0741644051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:48:12 np0005604943 podman[239633]: 2026-02-02 11:48:12.458111337 +0000 UTC m=+0.160615748 container start 70a8219a7d1abf9f622f677571c2728988f97be30a331db2a0ba2e0741644051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_herschel, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:48:12 np0005604943 podman[239633]: 2026-02-02 11:48:12.464335698 +0000 UTC m=+0.166840119 container attach 70a8219a7d1abf9f622f677571c2728988f97be30a331db2a0ba2e0741644051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_herschel, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:48:12 np0005604943 festive_herschel[239650]: {
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:    "0": [
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:        {
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "devices": [
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "/dev/loop3"
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            ],
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_name": "ceph_lv0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_size": "21470642176",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "name": "ceph_lv0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "tags": {
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.cluster_name": "ceph",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.crush_device_class": "",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.encrypted": "0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.objectstore": "bluestore",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.osd_id": "0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.type": "block",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.vdo": "0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.with_tpm": "0"
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            },
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "type": "block",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "vg_name": "ceph_vg0"
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:        }
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:    ],
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:    "1": [
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:        {
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "devices": [
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "/dev/loop4"
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            ],
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_name": "ceph_lv1",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_size": "21470642176",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "name": "ceph_lv1",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "tags": {
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.cluster_name": "ceph",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.crush_device_class": "",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.encrypted": "0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.objectstore": "bluestore",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.osd_id": "1",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.type": "block",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.vdo": "0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.with_tpm": "0"
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            },
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "type": "block",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "vg_name": "ceph_vg1"
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:        }
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:    ],
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:    "2": [
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:        {
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "devices": [
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "/dev/loop5"
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            ],
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_name": "ceph_lv2",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_size": "21470642176",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "name": "ceph_lv2",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "tags": {
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.cluster_name": "ceph",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.crush_device_class": "",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.encrypted": "0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.objectstore": "bluestore",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.osd_id": "2",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.type": "block",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.vdo": "0",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:                "ceph.with_tpm": "0"
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            },
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "type": "block",
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:            "vg_name": "ceph_vg2"
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:        }
Feb  2 06:48:12 np0005604943 festive_herschel[239650]:    ]
Feb  2 06:48:12 np0005604943 festive_herschel[239650]: }
Feb  2 06:48:12 np0005604943 systemd[1]: libpod-70a8219a7d1abf9f622f677571c2728988f97be30a331db2a0ba2e0741644051.scope: Deactivated successfully.
Feb  2 06:48:12 np0005604943 podman[239633]: 2026-02-02 11:48:12.711925251 +0000 UTC m=+0.414429692 container died 70a8219a7d1abf9f622f677571c2728988f97be30a331db2a0ba2e0741644051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 06:48:12 np0005604943 systemd[1]: var-lib-containers-storage-overlay-cfef004bac63885f505a09f3b990d640714b359ec97838998a638685dee66e2e-merged.mount: Deactivated successfully.
Feb  2 06:48:12 np0005604943 podman[239633]: 2026-02-02 11:48:12.785036087 +0000 UTC m=+0.487540508 container remove 70a8219a7d1abf9f622f677571c2728988f97be30a331db2a0ba2e0741644051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:48:12 np0005604943 systemd[1]: libpod-conmon-70a8219a7d1abf9f622f677571c2728988f97be30a331db2a0ba2e0741644051.scope: Deactivated successfully.
Feb  2 06:48:13 np0005604943 podman[239731]: 2026-02-02 11:48:13.177678251 +0000 UTC m=+0.035853105 container create a844b64ae1ce2ad3bb87261196925ce2e66d1e2344340a33c674cbed568b91ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 06:48:13 np0005604943 systemd[1]: Started libpod-conmon-a844b64ae1ce2ad3bb87261196925ce2e66d1e2344340a33c674cbed568b91ea.scope.
Feb  2 06:48:13 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:48:13 np0005604943 podman[239731]: 2026-02-02 11:48:13.245910183 +0000 UTC m=+0.104085067 container init a844b64ae1ce2ad3bb87261196925ce2e66d1e2344340a33c674cbed568b91ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_kilby, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:48:13 np0005604943 podman[239731]: 2026-02-02 11:48:13.250212131 +0000 UTC m=+0.108386995 container start a844b64ae1ce2ad3bb87261196925ce2e66d1e2344340a33c674cbed568b91ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:48:13 np0005604943 gracious_kilby[239747]: 167 167
Feb  2 06:48:13 np0005604943 systemd[1]: libpod-a844b64ae1ce2ad3bb87261196925ce2e66d1e2344340a33c674cbed568b91ea.scope: Deactivated successfully.
Feb  2 06:48:13 np0005604943 podman[239731]: 2026-02-02 11:48:13.254838508 +0000 UTC m=+0.113013392 container attach a844b64ae1ce2ad3bb87261196925ce2e66d1e2344340a33c674cbed568b91ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_kilby, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 06:48:13 np0005604943 podman[239731]: 2026-02-02 11:48:13.255141206 +0000 UTC m=+0.113316080 container died a844b64ae1ce2ad3bb87261196925ce2e66d1e2344340a33c674cbed568b91ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:48:13 np0005604943 podman[239731]: 2026-02-02 11:48:13.158507715 +0000 UTC m=+0.016682599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:48:13 np0005604943 systemd[1]: var-lib-containers-storage-overlay-e722e3f967c2ec49f816075de773142d616408be0e5bccfd71f5ed0a71b0b849-merged.mount: Deactivated successfully.
Feb  2 06:48:13 np0005604943 podman[239731]: 2026-02-02 11:48:13.33144364 +0000 UTC m=+0.189618504 container remove a844b64ae1ce2ad3bb87261196925ce2e66d1e2344340a33c674cbed568b91ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_kilby, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:48:13 np0005604943 systemd[1]: libpod-conmon-a844b64ae1ce2ad3bb87261196925ce2e66d1e2344340a33c674cbed568b91ea.scope: Deactivated successfully.
Feb  2 06:48:13 np0005604943 podman[239771]: 2026-02-02 11:48:13.467079721 +0000 UTC m=+0.039589417 container create f0bd7b80592b5816af0d5ad880cea4292d40661ec0674ce857f42da6cd9c6df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_tharp, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:48:13 np0005604943 systemd[1]: Started libpod-conmon-f0bd7b80592b5816af0d5ad880cea4292d40661ec0674ce857f42da6cd9c6df0.scope.
Feb  2 06:48:13 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:48:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f05ae770a13e3039a09ad7425bbc1e09f801e0f725c8315a7877dcc330b13c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:48:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f05ae770a13e3039a09ad7425bbc1e09f801e0f725c8315a7877dcc330b13c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:48:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f05ae770a13e3039a09ad7425bbc1e09f801e0f725c8315a7877dcc330b13c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:48:13 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f05ae770a13e3039a09ad7425bbc1e09f801e0f725c8315a7877dcc330b13c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:48:13 np0005604943 podman[239771]: 2026-02-02 11:48:13.5413733 +0000 UTC m=+0.113883026 container init f0bd7b80592b5816af0d5ad880cea4292d40661ec0674ce857f42da6cd9c6df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:48:13 np0005604943 podman[239771]: 2026-02-02 11:48:13.445065368 +0000 UTC m=+0.017575084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:48:13 np0005604943 podman[239771]: 2026-02-02 11:48:13.546426218 +0000 UTC m=+0.118935914 container start f0bd7b80592b5816af0d5ad880cea4292d40661ec0674ce857f42da6cd9c6df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 06:48:13 np0005604943 podman[239771]: 2026-02-02 11:48:13.574442727 +0000 UTC m=+0.146952413 container attach f0bd7b80592b5816af0d5ad880cea4292d40661ec0674ce857f42da6cd9c6df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_tharp, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:48:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:14 np0005604943 lvm[239881]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:48:14 np0005604943 lvm[239881]: VG ceph_vg0 finished
Feb  2 06:48:14 np0005604943 lvm[239887]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:48:14 np0005604943 lvm[239887]: VG ceph_vg1 finished
Feb  2 06:48:14 np0005604943 lvm[239888]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:48:14 np0005604943 lvm[239888]: VG ceph_vg2 finished
Feb  2 06:48:14 np0005604943 lvm[239907]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:48:14 np0005604943 lvm[239907]: VG ceph_vg0 finished
Feb  2 06:48:14 np0005604943 podman[239865]: 2026-02-02 11:48:14.173085893 +0000 UTC m=+0.066661680 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:48:14 np0005604943 podman[239863]: 2026-02-02 11:48:14.173248348 +0000 UTC m=+0.072821110 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb  2 06:48:14 np0005604943 condescending_tharp[239788]: {}
Feb  2 06:48:14 np0005604943 systemd[1]: libpod-f0bd7b80592b5816af0d5ad880cea4292d40661ec0674ce857f42da6cd9c6df0.scope: Deactivated successfully.
Feb  2 06:48:14 np0005604943 systemd[1]: libpod-f0bd7b80592b5816af0d5ad880cea4292d40661ec0674ce857f42da6cd9c6df0.scope: Consumed 1.038s CPU time.
Feb  2 06:48:14 np0005604943 podman[239771]: 2026-02-02 11:48:14.233740217 +0000 UTC m=+0.806249913 container died f0bd7b80592b5816af0d5ad880cea4292d40661ec0674ce857f42da6cd9c6df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_tharp, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 06:48:14 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b5f05ae770a13e3039a09ad7425bbc1e09f801e0f725c8315a7877dcc330b13c-merged.mount: Deactivated successfully.
Feb  2 06:48:14 np0005604943 podman[239771]: 2026-02-02 11:48:14.344580598 +0000 UTC m=+0.917090294 container remove f0bd7b80592b5816af0d5ad880cea4292d40661ec0674ce857f42da6cd9c6df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_tharp, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 06:48:14 np0005604943 systemd[1]: libpod-conmon-f0bd7b80592b5816af0d5ad880cea4292d40661ec0674ce857f42da6cd9c6df0.scope: Deactivated successfully.
Feb  2 06:48:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:48:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:48:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:48:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:48:15 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:48:15 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:48:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:48:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.644 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.645 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.645 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.645 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.675 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.676 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.676 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.676 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.677 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.677 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.677 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.677 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.677 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.725 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.726 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.726 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.726 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 06:48:20 np0005604943 nova_compute[238883]: 2026-02-02 11:48:20.727 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:48:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:48:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/994813954' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:48:21 np0005604943 nova_compute[238883]: 2026-02-02 11:48:21.302 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:48:21 np0005604943 nova_compute[238883]: 2026-02-02 11:48:21.446 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:48:21 np0005604943 nova_compute[238883]: 2026-02-02 11:48:21.447 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5098MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 06:48:21 np0005604943 nova_compute[238883]: 2026-02-02 11:48:21.447 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:48:21 np0005604943 nova_compute[238883]: 2026-02-02 11:48:21.447 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:48:21 np0005604943 nova_compute[238883]: 2026-02-02 11:48:21.533 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 06:48:21 np0005604943 nova_compute[238883]: 2026-02-02 11:48:21.534 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 06:48:21 np0005604943 nova_compute[238883]: 2026-02-02 11:48:21.563 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:48:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:48:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/387826482' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:48:22 np0005604943 nova_compute[238883]: 2026-02-02 11:48:22.157 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:48:22 np0005604943 nova_compute[238883]: 2026-02-02 11:48:22.163 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:48:22 np0005604943 nova_compute[238883]: 2026-02-02 11:48:22.212 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:48:22 np0005604943 nova_compute[238883]: 2026-02-02 11:48:22.262 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 06:48:22 np0005604943 nova_compute[238883]: 2026-02-02 11:48:22.262 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:48:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:48:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.310480) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032907310536, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1781, "num_deletes": 251, "total_data_size": 3004439, "memory_usage": 3038120, "flush_reason": "Manual Compaction"}
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032907336656, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1701378, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11753, "largest_seqno": 13533, "table_properties": {"data_size": 1695536, "index_size": 2917, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14778, "raw_average_key_size": 20, "raw_value_size": 1682604, "raw_average_value_size": 2298, "num_data_blocks": 135, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770032707, "oldest_key_time": 1770032707, "file_creation_time": 1770032907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 26306 microseconds, and 6155 cpu microseconds.
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.336777) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1701378 bytes OK
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.336816) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.358287) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.358315) EVENT_LOG_v1 {"time_micros": 1770032907358310, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.358335) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2996857, prev total WAL file size 2996857, number of live WAL files 2.
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.359118) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1661KB)], [29(7972KB)]
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032907359199, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9864713, "oldest_snapshot_seqno": -1}
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4015 keys, 7761466 bytes, temperature: kUnknown
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032907462913, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7761466, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7732661, "index_size": 17687, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95545, "raw_average_key_size": 23, "raw_value_size": 7658293, "raw_average_value_size": 1907, "num_data_blocks": 771, "num_entries": 4015, "num_filter_entries": 4015, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770032907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.463163) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7761466 bytes
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.466620) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.1 rd, 74.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.8 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(10.4) write-amplify(4.6) OK, records in: 4435, records dropped: 420 output_compression: NoCompression
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.466661) EVENT_LOG_v1 {"time_micros": 1770032907466646, "job": 12, "event": "compaction_finished", "compaction_time_micros": 103768, "compaction_time_cpu_micros": 26612, "output_level": 6, "num_output_files": 1, "total_output_size": 7761466, "num_input_records": 4435, "num_output_records": 4015, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032907467056, "job": 12, "event": "table_file_deletion", "file_number": 31}
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770032907468153, "job": 12, "event": "table_file_deletion", "file_number": 29}
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.358937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.468292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.468305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.468308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.468310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:48:27.468313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:48:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Feb  2 06:48:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2675708449' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb  2 06:48:27 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb  2 06:48:27 np0005604943 ceph-mgr[75558]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb  2 06:48:27 np0005604943 ceph-mgr[75558]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb  2 06:48:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:48:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:48:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:48:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:48:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:48:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:48:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:48:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:48:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:48:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:48:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861215678' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:48:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:48:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861215678' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:48:45 np0005604943 podman[240001]: 2026-02-02 11:48:45.029049547 +0000 UTC m=+0.049735472 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Feb  2 06:48:45 np0005604943 podman[240000]: 2026-02-02 11:48:45.054005491 +0000 UTC m=+0.075273333 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb  2 06:48:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:48:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Feb  2 06:48:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3820106200' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Feb  2 06:48:51 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.14346 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Feb  2 06:48:51 np0005604943 ceph-mgr[75558]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb  2 06:48:51 np0005604943 ceph-mgr[75558]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Feb  2 06:48:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:48:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:48:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:48:59 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:01 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:49:03 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:05 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:49:07 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:49:09
Feb  2 06:49:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:49:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:49:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'backups']
Feb  2 06:49:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:49:09 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:49:10.013 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:49:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:49:10.013 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:49:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:49:10.013 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:49:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:49:11 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:49:13 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:49:15 np0005604943 podman[240150]: 2026-02-02 11:49:15.189384163 +0000 UTC m=+0.077060851 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:49:15 np0005604943 podman[240151]: 2026-02-02 11:49:15.189470356 +0000 UTC m=+0.076857907 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:49:15 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:49:15 np0005604943 podman[240233]: 2026-02-02 11:49:15.459501957 +0000 UTC m=+0.047423538 container create f030938ea844d05718db852d8e684f9eb08efe2ac5addf9a0a63ff51b42b4e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 06:49:15 np0005604943 systemd[1]: Started libpod-conmon-f030938ea844d05718db852d8e684f9eb08efe2ac5addf9a0a63ff51b42b4e13.scope.
Feb  2 06:49:15 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:49:15 np0005604943 podman[240233]: 2026-02-02 11:49:15.43799339 +0000 UTC m=+0.025915011 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:49:15 np0005604943 podman[240233]: 2026-02-02 11:49:15.540372334 +0000 UTC m=+0.128293945 container init f030938ea844d05718db852d8e684f9eb08efe2ac5addf9a0a63ff51b42b4e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:49:15 np0005604943 podman[240233]: 2026-02-02 11:49:15.547142762 +0000 UTC m=+0.135064343 container start f030938ea844d05718db852d8e684f9eb08efe2ac5addf9a0a63ff51b42b4e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:49:15 np0005604943 podman[240233]: 2026-02-02 11:49:15.550764012 +0000 UTC m=+0.138685613 container attach f030938ea844d05718db852d8e684f9eb08efe2ac5addf9a0a63ff51b42b4e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 06:49:15 np0005604943 optimistic_feynman[240249]: 167 167
Feb  2 06:49:15 np0005604943 systemd[1]: libpod-f030938ea844d05718db852d8e684f9eb08efe2ac5addf9a0a63ff51b42b4e13.scope: Deactivated successfully.
Feb  2 06:49:15 np0005604943 podman[240233]: 2026-02-02 11:49:15.552358128 +0000 UTC m=+0.140279709 container died f030938ea844d05718db852d8e684f9eb08efe2ac5addf9a0a63ff51b42b4e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Feb  2 06:49:15 np0005604943 systemd[1]: var-lib-containers-storage-overlay-797c08bce4fc9eb738af2478fed1518b8ea7248d1f0c758b2e6b5368213a2f8e-merged.mount: Deactivated successfully.
Feb  2 06:49:15 np0005604943 podman[240233]: 2026-02-02 11:49:15.587576326 +0000 UTC m=+0.175497907 container remove f030938ea844d05718db852d8e684f9eb08efe2ac5addf9a0a63ff51b42b4e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:49:15 np0005604943 systemd[1]: libpod-conmon-f030938ea844d05718db852d8e684f9eb08efe2ac5addf9a0a63ff51b42b4e13.scope: Deactivated successfully.
Feb  2 06:49:15 np0005604943 podman[240271]: 2026-02-02 11:49:15.704022321 +0000 UTC m=+0.042016149 container create 22d71451457076d518d44b18d3fc615e8477fdfab70a718ad28fe530eb42e3ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 06:49:15 np0005604943 systemd[1]: Started libpod-conmon-22d71451457076d518d44b18d3fc615e8477fdfab70a718ad28fe530eb42e3ce.scope.
Feb  2 06:49:15 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:49:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6873912259fc224360561caff0588ccb00c27207529356eebfd8520218ee8e8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:49:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6873912259fc224360561caff0588ccb00c27207529356eebfd8520218ee8e8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:49:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6873912259fc224360561caff0588ccb00c27207529356eebfd8520218ee8e8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:49:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6873912259fc224360561caff0588ccb00c27207529356eebfd8520218ee8e8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:49:15 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6873912259fc224360561caff0588ccb00c27207529356eebfd8520218ee8e8b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:49:15 np0005604943 podman[240271]: 2026-02-02 11:49:15.775171808 +0000 UTC m=+0.113165666 container init 22d71451457076d518d44b18d3fc615e8477fdfab70a718ad28fe530eb42e3ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mcclintock, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:49:15 np0005604943 podman[240271]: 2026-02-02 11:49:15.681727401 +0000 UTC m=+0.019721249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:49:15 np0005604943 podman[240271]: 2026-02-02 11:49:15.780971358 +0000 UTC m=+0.118965186 container start 22d71451457076d518d44b18d3fc615e8477fdfab70a718ad28fe530eb42e3ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mcclintock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:49:15 np0005604943 podman[240271]: 2026-02-02 11:49:15.785649498 +0000 UTC m=+0.123643346 container attach 22d71451457076d518d44b18d3fc615e8477fdfab70a718ad28fe530eb42e3ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 06:49:15 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:16 np0005604943 serene_mcclintock[240287]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:49:16 np0005604943 serene_mcclintock[240287]: --> All data devices are unavailable
Feb  2 06:49:16 np0005604943 systemd[1]: libpod-22d71451457076d518d44b18d3fc615e8477fdfab70a718ad28fe530eb42e3ce.scope: Deactivated successfully.
Feb  2 06:49:16 np0005604943 podman[240271]: 2026-02-02 11:49:16.17197665 +0000 UTC m=+0.509970478 container died 22d71451457076d518d44b18d3fc615e8477fdfab70a718ad28fe530eb42e3ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mcclintock, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:49:16 np0005604943 systemd[1]: var-lib-containers-storage-overlay-6873912259fc224360561caff0588ccb00c27207529356eebfd8520218ee8e8b-merged.mount: Deactivated successfully.
Feb  2 06:49:16 np0005604943 podman[240271]: 2026-02-02 11:49:16.216347593 +0000 UTC m=+0.554341421 container remove 22d71451457076d518d44b18d3fc615e8477fdfab70a718ad28fe530eb42e3ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_mcclintock, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 06:49:16 np0005604943 systemd[1]: libpod-conmon-22d71451457076d518d44b18d3fc615e8477fdfab70a718ad28fe530eb42e3ce.scope: Deactivated successfully.
Feb  2 06:49:16 np0005604943 podman[240385]: 2026-02-02 11:49:16.651736319 +0000 UTC m=+0.041010781 container create 144dc6bc06f37dd655104cbccfee874a3b12f59aca6cbcecacff43cd5a353321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 06:49:16 np0005604943 systemd[1]: Started libpod-conmon-144dc6bc06f37dd655104cbccfee874a3b12f59aca6cbcecacff43cd5a353321.scope.
Feb  2 06:49:16 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:49:16 np0005604943 podman[240385]: 2026-02-02 11:49:16.633905023 +0000 UTC m=+0.023179515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:49:16 np0005604943 podman[240385]: 2026-02-02 11:49:16.732187423 +0000 UTC m=+0.121461905 container init 144dc6bc06f37dd655104cbccfee874a3b12f59aca6cbcecacff43cd5a353321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Feb  2 06:49:16 np0005604943 podman[240385]: 2026-02-02 11:49:16.737716597 +0000 UTC m=+0.126991059 container start 144dc6bc06f37dd655104cbccfee874a3b12f59aca6cbcecacff43cd5a353321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 06:49:16 np0005604943 podman[240385]: 2026-02-02 11:49:16.740550126 +0000 UTC m=+0.129824588 container attach 144dc6bc06f37dd655104cbccfee874a3b12f59aca6cbcecacff43cd5a353321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 06:49:16 np0005604943 vibrant_mahavira[240401]: 167 167
Feb  2 06:49:16 np0005604943 systemd[1]: libpod-144dc6bc06f37dd655104cbccfee874a3b12f59aca6cbcecacff43cd5a353321.scope: Deactivated successfully.
Feb  2 06:49:16 np0005604943 podman[240385]: 2026-02-02 11:49:16.742236442 +0000 UTC m=+0.131510894 container died 144dc6bc06f37dd655104cbccfee874a3b12f59aca6cbcecacff43cd5a353321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:49:16 np0005604943 systemd[1]: var-lib-containers-storage-overlay-87f60d7951a89c1b1e55fbac92283410739210c7928e02bb0e42b4b383474075-merged.mount: Deactivated successfully.
Feb  2 06:49:16 np0005604943 podman[240385]: 2026-02-02 11:49:16.779340193 +0000 UTC m=+0.168614685 container remove 144dc6bc06f37dd655104cbccfee874a3b12f59aca6cbcecacff43cd5a353321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:49:16 np0005604943 systemd[1]: libpod-conmon-144dc6bc06f37dd655104cbccfee874a3b12f59aca6cbcecacff43cd5a353321.scope: Deactivated successfully.
Feb  2 06:49:16 np0005604943 podman[240424]: 2026-02-02 11:49:16.918960012 +0000 UTC m=+0.046010409 container create b6a13a018690910c8a4ecd25d5fb37a0f94f9a1f2e7519fac356c58ba06f97a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 06:49:16 np0005604943 systemd[1]: Started libpod-conmon-b6a13a018690910c8a4ecd25d5fb37a0f94f9a1f2e7519fac356c58ba06f97a6.scope.
Feb  2 06:49:16 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:49:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad6d6e58d69086d2d2f6b3760441ddede62af217daadcdb55362f448911d08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:49:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad6d6e58d69086d2d2f6b3760441ddede62af217daadcdb55362f448911d08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:49:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad6d6e58d69086d2d2f6b3760441ddede62af217daadcdb55362f448911d08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:49:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ad6d6e58d69086d2d2f6b3760441ddede62af217daadcdb55362f448911d08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:49:16 np0005604943 podman[240424]: 2026-02-02 11:49:16.900002005 +0000 UTC m=+0.027052422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:49:17 np0005604943 podman[240424]: 2026-02-02 11:49:17.001957728 +0000 UTC m=+0.129008155 container init b6a13a018690910c8a4ecd25d5fb37a0f94f9a1f2e7519fac356c58ba06f97a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wilson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 06:49:17 np0005604943 podman[240424]: 2026-02-02 11:49:17.007252205 +0000 UTC m=+0.134302612 container start b6a13a018690910c8a4ecd25d5fb37a0f94f9a1f2e7519fac356c58ba06f97a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wilson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 06:49:17 np0005604943 podman[240424]: 2026-02-02 11:49:17.01067185 +0000 UTC m=+0.137722337 container attach b6a13a018690910c8a4ecd25d5fb37a0f94f9a1f2e7519fac356c58ba06f97a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wilson, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]: {
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:    "0": [
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:        {
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "devices": [
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "/dev/loop3"
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            ],
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_name": "ceph_lv0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_size": "21470642176",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "name": "ceph_lv0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "tags": {
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.cluster_name": "ceph",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.crush_device_class": "",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.encrypted": "0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.objectstore": "bluestore",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.osd_id": "0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.type": "block",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.vdo": "0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.with_tpm": "0"
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            },
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "type": "block",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "vg_name": "ceph_vg0"
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:        }
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:    ],
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:    "1": [
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:        {
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "devices": [
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "/dev/loop4"
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            ],
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_name": "ceph_lv1",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_size": "21470642176",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "name": "ceph_lv1",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "tags": {
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.cluster_name": "ceph",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.crush_device_class": "",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.encrypted": "0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.objectstore": "bluestore",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.osd_id": "1",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.type": "block",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.vdo": "0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.with_tpm": "0"
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            },
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "type": "block",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "vg_name": "ceph_vg1"
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:        }
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:    ],
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:    "2": [
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:        {
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "devices": [
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "/dev/loop5"
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            ],
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_name": "ceph_lv2",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_size": "21470642176",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "name": "ceph_lv2",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "tags": {
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.cluster_name": "ceph",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.crush_device_class": "",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.encrypted": "0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.objectstore": "bluestore",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.osd_id": "2",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.type": "block",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.vdo": "0",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:                "ceph.with_tpm": "0"
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            },
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "type": "block",
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:            "vg_name": "ceph_vg2"
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:        }
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]:    ]
Feb  2 06:49:17 np0005604943 gracious_wilson[240440]: }
Feb  2 06:49:17 np0005604943 systemd[1]: libpod-b6a13a018690910c8a4ecd25d5fb37a0f94f9a1f2e7519fac356c58ba06f97a6.scope: Deactivated successfully.
Feb  2 06:49:17 np0005604943 podman[240424]: 2026-02-02 11:49:17.290760711 +0000 UTC m=+0.417811098 container died b6a13a018690910c8a4ecd25d5fb37a0f94f9a1f2e7519fac356c58ba06f97a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wilson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 06:49:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:49:17 np0005604943 systemd[1]: var-lib-containers-storage-overlay-59ad6d6e58d69086d2d2f6b3760441ddede62af217daadcdb55362f448911d08-merged.mount: Deactivated successfully.
Feb  2 06:49:17 np0005604943 podman[240424]: 2026-02-02 11:49:17.330560087 +0000 UTC m=+0.457610484 container remove b6a13a018690910c8a4ecd25d5fb37a0f94f9a1f2e7519fac356c58ba06f97a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 06:49:17 np0005604943 systemd[1]: libpod-conmon-b6a13a018690910c8a4ecd25d5fb37a0f94f9a1f2e7519fac356c58ba06f97a6.scope: Deactivated successfully.
Feb  2 06:49:17 np0005604943 podman[240524]: 2026-02-02 11:49:17.720064017 +0000 UTC m=+0.046645947 container create 91b74beb1adaa87ef76146a29f33b43b487a44b6c721d657bd913141393a5648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_khayyam, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:49:17 np0005604943 systemd[1]: Started libpod-conmon-91b74beb1adaa87ef76146a29f33b43b487a44b6c721d657bd913141393a5648.scope.
Feb  2 06:49:17 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:49:17 np0005604943 podman[240524]: 2026-02-02 11:49:17.790268508 +0000 UTC m=+0.116850468 container init 91b74beb1adaa87ef76146a29f33b43b487a44b6c721d657bd913141393a5648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 06:49:17 np0005604943 podman[240524]: 2026-02-02 11:49:17.698896109 +0000 UTC m=+0.025478049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:49:17 np0005604943 podman[240524]: 2026-02-02 11:49:17.797061456 +0000 UTC m=+0.123643376 container start 91b74beb1adaa87ef76146a29f33b43b487a44b6c721d657bd913141393a5648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_khayyam, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:49:17 np0005604943 podman[240524]: 2026-02-02 11:49:17.800359338 +0000 UTC m=+0.126941298 container attach 91b74beb1adaa87ef76146a29f33b43b487a44b6c721d657bd913141393a5648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 06:49:17 np0005604943 gallant_khayyam[240540]: 167 167
Feb  2 06:49:17 np0005604943 systemd[1]: libpod-91b74beb1adaa87ef76146a29f33b43b487a44b6c721d657bd913141393a5648.scope: Deactivated successfully.
Feb  2 06:49:17 np0005604943 podman[240524]: 2026-02-02 11:49:17.802517518 +0000 UTC m=+0.129099438 container died 91b74beb1adaa87ef76146a29f33b43b487a44b6c721d657bd913141393a5648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 06:49:17 np0005604943 systemd[1]: var-lib-containers-storage-overlay-aad55a40120f98fea2157f7cf3981eaaeb110892a3e88937c95e2686c6bb4150-merged.mount: Deactivated successfully.
Feb  2 06:49:17 np0005604943 podman[240524]: 2026-02-02 11:49:17.839066013 +0000 UTC m=+0.165647943 container remove 91b74beb1adaa87ef76146a29f33b43b487a44b6c721d657bd913141393a5648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_khayyam, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:49:17 np0005604943 systemd[1]: libpod-conmon-91b74beb1adaa87ef76146a29f33b43b487a44b6c721d657bd913141393a5648.scope: Deactivated successfully.
Feb  2 06:49:17 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:17 np0005604943 podman[240563]: 2026-02-02 11:49:17.99156146 +0000 UTC m=+0.058767104 container create 55b148fdb0178ef590bed17121eda9bb7f5b93f1cf2244a3836c3a3debcdb5f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_leakey, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:49:18 np0005604943 systemd[1]: Started libpod-conmon-55b148fdb0178ef590bed17121eda9bb7f5b93f1cf2244a3836c3a3debcdb5f7.scope.
Feb  2 06:49:18 np0005604943 podman[240563]: 2026-02-02 11:49:17.966998007 +0000 UTC m=+0.034203731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:49:18 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:49:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d22a58c5fba75942d3d863a3b1411abdd78eadd90a59abe1829fb346e21451/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:49:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d22a58c5fba75942d3d863a3b1411abdd78eadd90a59abe1829fb346e21451/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:49:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d22a58c5fba75942d3d863a3b1411abdd78eadd90a59abe1829fb346e21451/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:49:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d22a58c5fba75942d3d863a3b1411abdd78eadd90a59abe1829fb346e21451/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:49:18 np0005604943 podman[240563]: 2026-02-02 11:49:18.109379593 +0000 UTC m=+0.176585257 container init 55b148fdb0178ef590bed17121eda9bb7f5b93f1cf2244a3836c3a3debcdb5f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_leakey, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 06:49:18 np0005604943 podman[240563]: 2026-02-02 11:49:18.116904191 +0000 UTC m=+0.184109835 container start 55b148fdb0178ef590bed17121eda9bb7f5b93f1cf2244a3836c3a3debcdb5f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_leakey, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:49:18 np0005604943 podman[240563]: 2026-02-02 11:49:18.121130119 +0000 UTC m=+0.188335763 container attach 55b148fdb0178ef590bed17121eda9bb7f5b93f1cf2244a3836c3a3debcdb5f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_leakey, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:49:18 np0005604943 lvm[240656]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:49:18 np0005604943 lvm[240656]: VG ceph_vg0 finished
Feb  2 06:49:18 np0005604943 lvm[240659]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:49:18 np0005604943 lvm[240659]: VG ceph_vg1 finished
Feb  2 06:49:18 np0005604943 lvm[240661]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:49:18 np0005604943 lvm[240661]: VG ceph_vg2 finished
Feb  2 06:49:18 np0005604943 dazzling_leakey[240580]: {}
Feb  2 06:49:18 np0005604943 systemd[1]: libpod-55b148fdb0178ef590bed17121eda9bb7f5b93f1cf2244a3836c3a3debcdb5f7.scope: Deactivated successfully.
Feb  2 06:49:18 np0005604943 systemd[1]: libpod-55b148fdb0178ef590bed17121eda9bb7f5b93f1cf2244a3836c3a3debcdb5f7.scope: Consumed 1.103s CPU time.
Feb  2 06:49:18 np0005604943 podman[240563]: 2026-02-02 11:49:18.884735273 +0000 UTC m=+0.951940927 container died 55b148fdb0178ef590bed17121eda9bb7f5b93f1cf2244a3836c3a3debcdb5f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_leakey, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 06:49:18 np0005604943 systemd[1]: var-lib-containers-storage-overlay-07d22a58c5fba75942d3d863a3b1411abdd78eadd90a59abe1829fb346e21451-merged.mount: Deactivated successfully.
Feb  2 06:49:18 np0005604943 podman[240563]: 2026-02-02 11:49:18.915829286 +0000 UTC m=+0.983034930 container remove 55b148fdb0178ef590bed17121eda9bb7f5b93f1cf2244a3836c3a3debcdb5f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:49:18 np0005604943 systemd[1]: libpod-conmon-55b148fdb0178ef590bed17121eda9bb7f5b93f1cf2244a3836c3a3debcdb5f7.scope: Deactivated successfully.
Feb  2 06:49:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:49:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:49:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:49:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:49:19 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:49:19 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:49:19 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:49:21 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.252 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.253 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.272 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.272 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.272 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.291 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.291 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.291 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.291 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.291 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:49:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.319 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.319 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.319 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.319 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.320 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:49:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:49:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3719227476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.868 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.982 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.983 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5101MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.984 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:49:22 np0005604943 nova_compute[238883]: 2026-02-02 11:49:22.984 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:49:23 np0005604943 nova_compute[238883]: 2026-02-02 11:49:23.053 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 06:49:23 np0005604943 nova_compute[238883]: 2026-02-02 11:49:23.054 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 06:49:23 np0005604943 nova_compute[238883]: 2026-02-02 11:49:23.070 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:49:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:49:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/651855476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:49:23 np0005604943 nova_compute[238883]: 2026-02-02 11:49:23.579 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:49:23 np0005604943 nova_compute[238883]: 2026-02-02 11:49:23.584 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:49:23 np0005604943 nova_compute[238883]: 2026-02-02 11:49:23.600 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:49:23 np0005604943 nova_compute[238883]: 2026-02-02 11:49:23.602 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 06:49:23 np0005604943 nova_compute[238883]: 2026-02-02 11:49:23.603 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:49:23 np0005604943 nova_compute[238883]: 2026-02-02 11:49:23.954 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:49:23 np0005604943 nova_compute[238883]: 2026-02-02 11:49:23.954 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:49:23 np0005604943 nova_compute[238883]: 2026-02-02 11:49:23.955 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:49:23 np0005604943 nova_compute[238883]: 2026-02-02 11:49:23.955 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 06:49:23 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:25 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:49:27 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:29 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:31 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:49:33 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:35 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:49:37 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:39 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:49:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:49:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:49:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:49:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:49:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:49:41 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:49:43 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:49:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/340078116' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:49:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:49:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/340078116' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:49:45 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:46 np0005604943 podman[240746]: 2026-02-02 11:49:46.060026624 +0000 UTC m=+0.077858054 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 06:49:46 np0005604943 podman[240745]: 2026-02-02 11:49:46.069106566 +0000 UTC m=+0.086187756 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:49:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:49:47 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:49 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:49:51.366 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:49:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:49:51.367 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 06:49:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:49:51.367 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:49:51 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:49:53 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:55 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:49:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:49:57 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:50:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:50:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:50:09
Feb  2 06:50:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:50:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:50:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'volumes', '.mgr', 'backups', '.rgw.root', 'default.rgw.meta']
Feb  2 06:50:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:50:10.014 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:50:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:50:10.015 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:50:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:50:10.015 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:50:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:50:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:50:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:17 np0005604943 podman[240789]: 2026-02-02 11:50:17.017825006 +0000 UTC m=+0.041754121 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 06:50:17 np0005604943 podman[240788]: 2026-02-02 11:50:17.038021616 +0000 UTC m=+0.062543837 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:50:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:50:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:50:19 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:50:19 np0005604943 podman[240973]: 2026-02-02 11:50:19.966349666 +0000 UTC m=+0.041212866 container create b98a2cbf0f303e457151c254bc9883b9091f4645e689ea29539ba9bd46aa8582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_booth, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:50:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:20 np0005604943 systemd[1]: Started libpod-conmon-b98a2cbf0f303e457151c254bc9883b9091f4645e689ea29539ba9bd46aa8582.scope.
Feb  2 06:50:20 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:50:20 np0005604943 podman[240973]: 2026-02-02 11:50:19.945080876 +0000 UTC m=+0.019944086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:50:20 np0005604943 podman[240973]: 2026-02-02 11:50:20.058008722 +0000 UTC m=+0.132871942 container init b98a2cbf0f303e457151c254bc9883b9091f4645e689ea29539ba9bd46aa8582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_booth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:50:20 np0005604943 podman[240973]: 2026-02-02 11:50:20.067313381 +0000 UTC m=+0.142176581 container start b98a2cbf0f303e457151c254bc9883b9091f4645e689ea29539ba9bd46aa8582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_booth, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:50:20 np0005604943 podman[240973]: 2026-02-02 11:50:20.070433917 +0000 UTC m=+0.145297117 container attach b98a2cbf0f303e457151c254bc9883b9091f4645e689ea29539ba9bd46aa8582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:50:20 np0005604943 lucid_booth[240989]: 167 167
Feb  2 06:50:20 np0005604943 systemd[1]: libpod-b98a2cbf0f303e457151c254bc9883b9091f4645e689ea29539ba9bd46aa8582.scope: Deactivated successfully.
Feb  2 06:50:20 np0005604943 conmon[240989]: conmon b98a2cbf0f303e457151 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b98a2cbf0f303e457151c254bc9883b9091f4645e689ea29539ba9bd46aa8582.scope/container/memory.events
Feb  2 06:50:20 np0005604943 podman[240973]: 2026-02-02 11:50:20.074321246 +0000 UTC m=+0.149184466 container died b98a2cbf0f303e457151c254bc9883b9091f4645e689ea29539ba9bd46aa8582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_booth, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 06:50:20 np0005604943 systemd[1]: var-lib-containers-storage-overlay-af623a45f3a8969c309859c0870baf71dd36e783d7cfcb7e53efe79954f7466e-merged.mount: Deactivated successfully.
Feb  2 06:50:20 np0005604943 podman[240973]: 2026-02-02 11:50:20.112796334 +0000 UTC m=+0.187659534 container remove b98a2cbf0f303e457151c254bc9883b9091f4645e689ea29539ba9bd46aa8582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 06:50:20 np0005604943 systemd[1]: libpod-conmon-b98a2cbf0f303e457151c254bc9883b9091f4645e689ea29539ba9bd46aa8582.scope: Deactivated successfully.
Feb  2 06:50:20 np0005604943 podman[241013]: 2026-02-02 11:50:20.246969322 +0000 UTC m=+0.040468006 container create 1b466668fc54e626ba836c12bf875b3e83fdff2a9c0ffff66095c2abf82fe8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 06:50:20 np0005604943 systemd[1]: Started libpod-conmon-1b466668fc54e626ba836c12bf875b3e83fdff2a9c0ffff66095c2abf82fe8e6.scope.
Feb  2 06:50:20 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:50:20 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb265cd4a8a9aa8ebd00bbf29e58791ce9ff33a4ab4414dd58797d46496bca7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:50:20 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb265cd4a8a9aa8ebd00bbf29e58791ce9ff33a4ab4414dd58797d46496bca7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:50:20 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb265cd4a8a9aa8ebd00bbf29e58791ce9ff33a4ab4414dd58797d46496bca7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:50:20 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb265cd4a8a9aa8ebd00bbf29e58791ce9ff33a4ab4414dd58797d46496bca7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:50:20 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb265cd4a8a9aa8ebd00bbf29e58791ce9ff33a4ab4414dd58797d46496bca7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:50:20 np0005604943 podman[241013]: 2026-02-02 11:50:20.324938438 +0000 UTC m=+0.118437142 container init 1b466668fc54e626ba836c12bf875b3e83fdff2a9c0ffff66095c2abf82fe8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ishizaka, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:50:20 np0005604943 podman[241013]: 2026-02-02 11:50:20.22893057 +0000 UTC m=+0.022429274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:50:20 np0005604943 podman[241013]: 2026-02-02 11:50:20.328857107 +0000 UTC m=+0.122355791 container start 1b466668fc54e626ba836c12bf875b3e83fdff2a9c0ffff66095c2abf82fe8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 06:50:20 np0005604943 podman[241013]: 2026-02-02 11:50:20.332710973 +0000 UTC m=+0.126209677 container attach 1b466668fc54e626ba836c12bf875b3e83fdff2a9c0ffff66095c2abf82fe8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ishizaka, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 06:50:20 np0005604943 angry_ishizaka[241030]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:50:20 np0005604943 angry_ishizaka[241030]: --> All data devices are unavailable
Feb  2 06:50:20 np0005604943 systemd[1]: libpod-1b466668fc54e626ba836c12bf875b3e83fdff2a9c0ffff66095c2abf82fe8e6.scope: Deactivated successfully.
Feb  2 06:50:20 np0005604943 podman[241013]: 2026-02-02 11:50:20.702768705 +0000 UTC m=+0.496267389 container died 1b466668fc54e626ba836c12bf875b3e83fdff2a9c0ffff66095c2abf82fe8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 06:50:20 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1eb265cd4a8a9aa8ebd00bbf29e58791ce9ff33a4ab4414dd58797d46496bca7-merged.mount: Deactivated successfully.
Feb  2 06:50:20 np0005604943 podman[241013]: 2026-02-02 11:50:20.743669791 +0000 UTC m=+0.537168465 container remove 1b466668fc54e626ba836c12bf875b3e83fdff2a9c0ffff66095c2abf82fe8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:50:20 np0005604943 systemd[1]: libpod-conmon-1b466668fc54e626ba836c12bf875b3e83fdff2a9c0ffff66095c2abf82fe8e6.scope: Deactivated successfully.
Feb  2 06:50:21 np0005604943 podman[241123]: 2026-02-02 11:50:21.123616215 +0000 UTC m=+0.036651059 container create 3e4334aa97b1953668c682fa5872309f0c29ea8f9502cf204e5b823b8bbad11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 06:50:21 np0005604943 systemd[1]: Started libpod-conmon-3e4334aa97b1953668c682fa5872309f0c29ea8f9502cf204e5b823b8bbad11e.scope.
Feb  2 06:50:21 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:50:21 np0005604943 podman[241123]: 2026-02-02 11:50:21.191494022 +0000 UTC m=+0.104528946 container init 3e4334aa97b1953668c682fa5872309f0c29ea8f9502cf204e5b823b8bbad11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 06:50:21 np0005604943 podman[241123]: 2026-02-02 11:50:21.196564152 +0000 UTC m=+0.109598986 container start 3e4334aa97b1953668c682fa5872309f0c29ea8f9502cf204e5b823b8bbad11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:50:21 np0005604943 podman[241123]: 2026-02-02 11:50:21.199260537 +0000 UTC m=+0.112295561 container attach 3e4334aa97b1953668c682fa5872309f0c29ea8f9502cf204e5b823b8bbad11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Feb  2 06:50:21 np0005604943 heuristic_villani[241140]: 167 167
Feb  2 06:50:21 np0005604943 podman[241123]: 2026-02-02 11:50:21.200620295 +0000 UTC m=+0.113655319 container died 3e4334aa97b1953668c682fa5872309f0c29ea8f9502cf204e5b823b8bbad11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_villani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 06:50:21 np0005604943 systemd[1]: libpod-3e4334aa97b1953668c682fa5872309f0c29ea8f9502cf204e5b823b8bbad11e.scope: Deactivated successfully.
Feb  2 06:50:21 np0005604943 podman[241123]: 2026-02-02 11:50:21.110482441 +0000 UTC m=+0.023517295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:50:21 np0005604943 systemd[1]: var-lib-containers-storage-overlay-cf2cdf2b7ff8c4a671d12e5c7044c10c1c7739e902c75a1c1929d467f856dd03-merged.mount: Deactivated successfully.
Feb  2 06:50:21 np0005604943 podman[241123]: 2026-02-02 11:50:21.231522143 +0000 UTC m=+0.144556977 container remove 3e4334aa97b1953668c682fa5872309f0c29ea8f9502cf204e5b823b8bbad11e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb  2 06:50:21 np0005604943 systemd[1]: libpod-conmon-3e4334aa97b1953668c682fa5872309f0c29ea8f9502cf204e5b823b8bbad11e.scope: Deactivated successfully.
Feb  2 06:50:21 np0005604943 podman[241164]: 2026-02-02 11:50:21.364856908 +0000 UTC m=+0.048886419 container create 9a3dd78d2f1d9c93f2cd02cc0d587bddb9478b7642eff3358363f789b47db997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chaplygin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:50:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:50:21 np0005604943 systemd[1]: Started libpod-conmon-9a3dd78d2f1d9c93f2cd02cc0d587bddb9478b7642eff3358363f789b47db997.scope.
Feb  2 06:50:21 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:50:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2097df863e1159625d23b5e8c790145b335f4c3a7bf9ad29572dce61885a2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:50:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2097df863e1159625d23b5e8c790145b335f4c3a7bf9ad29572dce61885a2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:50:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2097df863e1159625d23b5e8c790145b335f4c3a7bf9ad29572dce61885a2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:50:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2097df863e1159625d23b5e8c790145b335f4c3a7bf9ad29572dce61885a2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:50:21 np0005604943 podman[241164]: 2026-02-02 11:50:21.346260071 +0000 UTC m=+0.030289552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:50:21 np0005604943 podman[241164]: 2026-02-02 11:50:21.440962272 +0000 UTC m=+0.124991733 container init 9a3dd78d2f1d9c93f2cd02cc0d587bddb9478b7642eff3358363f789b47db997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chaplygin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:50:21 np0005604943 podman[241164]: 2026-02-02 11:50:21.449260072 +0000 UTC m=+0.133289533 container start 9a3dd78d2f1d9c93f2cd02cc0d587bddb9478b7642eff3358363f789b47db997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chaplygin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb  2 06:50:21 np0005604943 podman[241164]: 2026-02-02 11:50:21.452691547 +0000 UTC m=+0.136721008 container attach 9a3dd78d2f1d9c93f2cd02cc0d587bddb9478b7642eff3358363f789b47db997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chaplygin, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:50:21 np0005604943 nova_compute[238883]: 2026-02-02 11:50:21.636 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:50:21 np0005604943 nova_compute[238883]: 2026-02-02 11:50:21.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:50:21 np0005604943 nova_compute[238883]: 2026-02-02 11:50:21.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:50:21 np0005604943 nova_compute[238883]: 2026-02-02 11:50:21.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]: {
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:    "0": [
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:        {
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "devices": [
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "/dev/loop3"
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            ],
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_name": "ceph_lv0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_size": "21470642176",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "name": "ceph_lv0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "tags": {
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.cluster_name": "ceph",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.crush_device_class": "",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.encrypted": "0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.objectstore": "bluestore",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.osd_id": "0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.type": "block",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.vdo": "0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.with_tpm": "0"
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            },
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "type": "block",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "vg_name": "ceph_vg0"
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:        }
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:    ],
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:    "1": [
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:        {
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "devices": [
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "/dev/loop4"
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            ],
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_name": "ceph_lv1",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_size": "21470642176",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "name": "ceph_lv1",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "tags": {
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.cluster_name": "ceph",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.crush_device_class": "",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.encrypted": "0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.objectstore": "bluestore",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.osd_id": "1",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.type": "block",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.vdo": "0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.with_tpm": "0"
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            },
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "type": "block",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "vg_name": "ceph_vg1"
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:        }
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:    ],
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:    "2": [
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:        {
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "devices": [
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "/dev/loop5"
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            ],
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_name": "ceph_lv2",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_size": "21470642176",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "name": "ceph_lv2",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "tags": {
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.cluster_name": "ceph",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.crush_device_class": "",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.encrypted": "0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.objectstore": "bluestore",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.osd_id": "2",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.type": "block",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.vdo": "0",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:                "ceph.with_tpm": "0"
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            },
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "type": "block",
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:            "vg_name": "ceph_vg2"
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:        }
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]:    ]
Feb  2 06:50:21 np0005604943 heuristic_chaplygin[241180]: }
Feb  2 06:50:21 np0005604943 systemd[1]: libpod-9a3dd78d2f1d9c93f2cd02cc0d587bddb9478b7642eff3358363f789b47db997.scope: Deactivated successfully.
Feb  2 06:50:21 np0005604943 podman[241164]: 2026-02-02 11:50:21.709769609 +0000 UTC m=+0.393799080 container died 9a3dd78d2f1d9c93f2cd02cc0d587bddb9478b7642eff3358363f789b47db997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chaplygin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:50:21 np0005604943 nova_compute[238883]: 2026-02-02 11:50:21.711 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:50:21 np0005604943 nova_compute[238883]: 2026-02-02 11:50:21.711 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:50:21 np0005604943 nova_compute[238883]: 2026-02-02 11:50:21.712 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:50:21 np0005604943 nova_compute[238883]: 2026-02-02 11:50:21.712 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 06:50:21 np0005604943 nova_compute[238883]: 2026-02-02 11:50:21.712 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:50:21 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ea2097df863e1159625d23b5e8c790145b335f4c3a7bf9ad29572dce61885a2d-merged.mount: Deactivated successfully.
Feb  2 06:50:21 np0005604943 podman[241164]: 2026-02-02 11:50:21.749624056 +0000 UTC m=+0.433653517 container remove 9a3dd78d2f1d9c93f2cd02cc0d587bddb9478b7642eff3358363f789b47db997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_chaplygin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Feb  2 06:50:21 np0005604943 systemd[1]: libpod-conmon-9a3dd78d2f1d9c93f2cd02cc0d587bddb9478b7642eff3358363f789b47db997.scope: Deactivated successfully.
Feb  2 06:50:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:22 np0005604943 podman[241283]: 2026-02-02 11:50:22.126671421 +0000 UTC m=+0.028273457 container create 5d0d214c09976342ebd2a9d3e5b08338bc527aa8094a0161a1c94dc5c1361e5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 06:50:22 np0005604943 systemd[1]: Started libpod-conmon-5d0d214c09976342ebd2a9d3e5b08338bc527aa8094a0161a1c94dc5c1361e5d.scope.
Feb  2 06:50:22 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:50:22 np0005604943 podman[241283]: 2026-02-02 11:50:22.189676192 +0000 UTC m=+0.091278248 container init 5d0d214c09976342ebd2a9d3e5b08338bc527aa8094a0161a1c94dc5c1361e5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 06:50:22 np0005604943 podman[241283]: 2026-02-02 11:50:22.194893176 +0000 UTC m=+0.096495212 container start 5d0d214c09976342ebd2a9d3e5b08338bc527aa8094a0161a1c94dc5c1361e5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_archimedes, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:50:22 np0005604943 systemd[1]: libpod-5d0d214c09976342ebd2a9d3e5b08338bc527aa8094a0161a1c94dc5c1361e5d.scope: Deactivated successfully.
Feb  2 06:50:22 np0005604943 vigilant_archimedes[241300]: 167 167
Feb  2 06:50:22 np0005604943 conmon[241300]: conmon 5d0d214c09976342ebd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d0d214c09976342ebd2a9d3e5b08338bc527aa8094a0161a1c94dc5c1361e5d.scope/container/memory.events
Feb  2 06:50:22 np0005604943 podman[241283]: 2026-02-02 11:50:22.199093853 +0000 UTC m=+0.100695889 container attach 5d0d214c09976342ebd2a9d3e5b08338bc527aa8094a0161a1c94dc5c1361e5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_archimedes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:50:22 np0005604943 podman[241283]: 2026-02-02 11:50:22.199469704 +0000 UTC m=+0.101071760 container died 5d0d214c09976342ebd2a9d3e5b08338bc527aa8094a0161a1c94dc5c1361e5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_archimedes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:50:22 np0005604943 podman[241283]: 2026-02-02 11:50:22.114885823 +0000 UTC m=+0.016487879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:50:22 np0005604943 systemd[1]: var-lib-containers-storage-overlay-093adef339a0e279dbe7fa42461dfca1558827cdd48b60ebf10d74fd3593463e-merged.mount: Deactivated successfully.
Feb  2 06:50:22 np0005604943 podman[241283]: 2026-02-02 11:50:22.231804532 +0000 UTC m=+0.133406568 container remove 5d0d214c09976342ebd2a9d3e5b08338bc527aa8094a0161a1c94dc5c1361e5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_archimedes, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:50:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:50:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1498361481' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:50:22 np0005604943 systemd[1]: libpod-conmon-5d0d214c09976342ebd2a9d3e5b08338bc527aa8094a0161a1c94dc5c1361e5d.scope: Deactivated successfully.
Feb  2 06:50:22 np0005604943 nova_compute[238883]: 2026-02-02 11:50:22.251 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:50:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:50:22 np0005604943 podman[241325]: 2026-02-02 11:50:22.38579866 +0000 UTC m=+0.044253981 container create b46c9c64fbd3cb3013d9def944f67ae2beff5ddd8ae7a427e54275dfe8951d5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 06:50:22 np0005604943 nova_compute[238883]: 2026-02-02 11:50:22.401 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:50:22 np0005604943 nova_compute[238883]: 2026-02-02 11:50:22.403 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5094MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 06:50:22 np0005604943 nova_compute[238883]: 2026-02-02 11:50:22.403 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:50:22 np0005604943 nova_compute[238883]: 2026-02-02 11:50:22.403 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:50:22 np0005604943 systemd[1]: Started libpod-conmon-b46c9c64fbd3cb3013d9def944f67ae2beff5ddd8ae7a427e54275dfe8951d5c.scope.
Feb  2 06:50:22 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:50:22 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7118eb271ffc3ddc55cc596846c5df89934679801342173b65f2e0be5dbeb908/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:50:22 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7118eb271ffc3ddc55cc596846c5df89934679801342173b65f2e0be5dbeb908/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:50:22 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7118eb271ffc3ddc55cc596846c5df89934679801342173b65f2e0be5dbeb908/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:50:22 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7118eb271ffc3ddc55cc596846c5df89934679801342173b65f2e0be5dbeb908/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:50:22 np0005604943 podman[241325]: 2026-02-02 11:50:22.452875024 +0000 UTC m=+0.111330365 container init b46c9c64fbd3cb3013d9def944f67ae2beff5ddd8ae7a427e54275dfe8951d5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:50:22 np0005604943 podman[241325]: 2026-02-02 11:50:22.458945622 +0000 UTC m=+0.117400933 container start b46c9c64fbd3cb3013d9def944f67ae2beff5ddd8ae7a427e54275dfe8951d5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_haslett, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:50:22 np0005604943 podman[241325]: 2026-02-02 11:50:22.463153119 +0000 UTC m=+0.121608710 container attach b46c9c64fbd3cb3013d9def944f67ae2beff5ddd8ae7a427e54275dfe8951d5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:50:22 np0005604943 podman[241325]: 2026-02-02 11:50:22.368451938 +0000 UTC m=+0.026907249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:50:22 np0005604943 nova_compute[238883]: 2026-02-02 11:50:22.513 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 06:50:22 np0005604943 nova_compute[238883]: 2026-02-02 11:50:22.515 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 06:50:22 np0005604943 nova_compute[238883]: 2026-02-02 11:50:22.532 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:50:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:50:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1084161787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:50:23 np0005604943 nova_compute[238883]: 2026-02-02 11:50:23.089 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:50:23 np0005604943 nova_compute[238883]: 2026-02-02 11:50:23.095 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:50:23 np0005604943 lvm[241442]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:50:23 np0005604943 lvm[241442]: VG ceph_vg1 finished
Feb  2 06:50:23 np0005604943 lvm[241441]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:50:23 np0005604943 lvm[241441]: VG ceph_vg0 finished
Feb  2 06:50:23 np0005604943 lvm[241444]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:50:23 np0005604943 lvm[241444]: VG ceph_vg2 finished
Feb  2 06:50:23 np0005604943 nova_compute[238883]: 2026-02-02 11:50:23.162 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:50:23 np0005604943 nova_compute[238883]: 2026-02-02 11:50:23.164 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 06:50:23 np0005604943 nova_compute[238883]: 2026-02-02 11:50:23.164 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:50:23 np0005604943 beautiful_haslett[241341]: {}
Feb  2 06:50:23 np0005604943 systemd[1]: libpod-b46c9c64fbd3cb3013d9def944f67ae2beff5ddd8ae7a427e54275dfe8951d5c.scope: Deactivated successfully.
Feb  2 06:50:23 np0005604943 systemd[1]: libpod-b46c9c64fbd3cb3013d9def944f67ae2beff5ddd8ae7a427e54275dfe8951d5c.scope: Consumed 1.253s CPU time.
Feb  2 06:50:23 np0005604943 podman[241325]: 2026-02-02 11:50:23.285998828 +0000 UTC m=+0.944454159 container died b46c9c64fbd3cb3013d9def944f67ae2beff5ddd8ae7a427e54275dfe8951d5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:50:23 np0005604943 systemd[1]: var-lib-containers-storage-overlay-7118eb271ffc3ddc55cc596846c5df89934679801342173b65f2e0be5dbeb908-merged.mount: Deactivated successfully.
Feb  2 06:50:23 np0005604943 podman[241325]: 2026-02-02 11:50:23.322988175 +0000 UTC m=+0.981443486 container remove b46c9c64fbd3cb3013d9def944f67ae2beff5ddd8ae7a427e54275dfe8951d5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb  2 06:50:23 np0005604943 systemd[1]: libpod-conmon-b46c9c64fbd3cb3013d9def944f67ae2beff5ddd8ae7a427e54275dfe8951d5c.scope: Deactivated successfully.
Feb  2 06:50:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:50:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:50:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:50:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:50:23 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:50:23 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:50:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:24 np0005604943 nova_compute[238883]: 2026-02-02 11:50:24.164 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:50:24 np0005604943 nova_compute[238883]: 2026-02-02 11:50:24.164 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 06:50:24 np0005604943 nova_compute[238883]: 2026-02-02 11:50:24.165 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 06:50:24 np0005604943 nova_compute[238883]: 2026-02-02 11:50:24.393 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 06:50:24 np0005604943 nova_compute[238883]: 2026-02-02 11:50:24.394 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:50:24 np0005604943 nova_compute[238883]: 2026-02-02 11:50:24.394 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:50:24 np0005604943 nova_compute[238883]: 2026-02-02 11:50:24.395 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:50:24 np0005604943 nova_compute[238883]: 2026-02-02 11:50:24.396 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:50:24 np0005604943 nova_compute[238883]: 2026-02-02 11:50:24.396 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 06:50:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.330705) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033027330753, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1465, "num_deletes": 506, "total_data_size": 1834629, "memory_usage": 1861456, "flush_reason": "Manual Compaction"}
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033027341047, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1805851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13534, "largest_seqno": 14998, "table_properties": {"data_size": 1799466, "index_size": 3076, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 15920, "raw_average_key_size": 18, "raw_value_size": 1784684, "raw_average_value_size": 2039, "num_data_blocks": 141, "num_entries": 875, "num_filter_entries": 875, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770032907, "oldest_key_time": 1770032907, "file_creation_time": 1770033027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 10406 microseconds, and 5748 cpu microseconds.
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.341115) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1805851 bytes OK
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.341134) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.342469) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.342482) EVENT_LOG_v1 {"time_micros": 1770033027342479, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.342498) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1827084, prev total WAL file size 1827084, number of live WAL files 2.
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.343034) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1763KB)], [32(7579KB)]
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033027343093, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9567317, "oldest_snapshot_seqno": -1}
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3865 keys, 7570593 bytes, temperature: kUnknown
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033027382561, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7570593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7542766, "index_size": 17093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 94520, "raw_average_key_size": 24, "raw_value_size": 7470843, "raw_average_value_size": 1932, "num_data_blocks": 726, "num_entries": 3865, "num_filter_entries": 3865, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770033027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.382916) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7570593 bytes
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.384196) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.7 rd, 191.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.4 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.5) write-amplify(4.2) OK, records in: 4890, records dropped: 1025 output_compression: NoCompression
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.384231) EVENT_LOG_v1 {"time_micros": 1770033027384219, "job": 14, "event": "compaction_finished", "compaction_time_micros": 39577, "compaction_time_cpu_micros": 22616, "output_level": 6, "num_output_files": 1, "total_output_size": 7570593, "num_input_records": 4890, "num_output_records": 3865, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033027384633, "job": 14, "event": "table_file_deletion", "file_number": 34}
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033027385729, "job": 14, "event": "table_file_deletion", "file_number": 32}
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.342838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.385860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.385866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.385868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.385870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:50:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:50:27.385872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:50:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:50:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:50:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:50:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:50:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:50:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:50:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:50:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:50:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:50:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:50:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2088037182' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:50:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:50:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2088037182' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:50:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:50:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:48 np0005604943 podman[241484]: 2026-02-02 11:50:48.053354926 +0000 UTC m=+0.075395745 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Feb  2 06:50:48 np0005604943 podman[241483]: 2026-02-02 11:50:48.069744211 +0000 UTC m=+0.091302317 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 06:50:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:51 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 06:50:51 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3361 writes, 15K keys, 3361 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3361 writes, 3361 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1292 writes, 5873 keys, 1292 commit groups, 1.0 writes per commit group, ingest: 8.63 MB, 0.01 MB/s#012Interval WAL: 1292 writes, 1292 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    146.4      0.11              0.04         7    0.016       0      0       0.0       0.0#012  L6      1/0    7.22 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6    162.7    133.7      0.32              0.12         6    0.053     24K   3207       0.0       0.0#012 Sum      1/0    7.22 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6    120.2    137.0      0.43              0.16        13    0.033     24K   3207       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    123.9    125.5      0.29              0.11         8    0.036     17K   2473       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    162.7    133.7      0.32              0.12         6    0.053     24K   3207       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    152.3      0.11              0.04         6    0.018       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.1      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.016, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cd5e4c78d0#2 capacity: 308.00 MB usage: 1.86 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(104,1.64 MB,0.533562%) FilterBlock(14,75.86 KB,0.0240524%) IndexBlock(14,149.80 KB,0.0474955%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 06:50:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:50:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:50:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:50:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:51:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:51:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:51:09
Feb  2 06:51:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:51:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:51:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'images', 'default.rgw.control']
Feb  2 06:51:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:51:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:51:10.015 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:51:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:51:10.016 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:51:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:51:10.016 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:51:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:51:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:51:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:51:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:19 np0005604943 podman[241532]: 2026-02-02 11:51:19.049999034 +0000 UTC m=+0.072866440 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller)
Feb  2 06:51:19 np0005604943 podman[241533]: 2026-02-02 11:51:19.055329105 +0000 UTC m=+0.078194431 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 06:51:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:51:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:51:21 np0005604943 nova_compute[238883]: 2026-02-02 11:51:21.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:51:21 np0005604943 nova_compute[238883]: 2026-02-02 11:51:21.673 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:51:21 np0005604943 nova_compute[238883]: 2026-02-02 11:51:21.673 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:51:21 np0005604943 nova_compute[238883]: 2026-02-02 11:51:21.674 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:51:21 np0005604943 nova_compute[238883]: 2026-02-02 11:51:21.674 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 06:51:21 np0005604943 nova_compute[238883]: 2026-02-02 11:51:21.674 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:51:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:51:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159433729' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:51:22 np0005604943 nova_compute[238883]: 2026-02-02 11:51:22.239 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:51:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:51:22 np0005604943 nova_compute[238883]: 2026-02-02 11:51:22.392 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:51:22 np0005604943 nova_compute[238883]: 2026-02-02 11:51:22.393 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5177MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 06:51:22 np0005604943 nova_compute[238883]: 2026-02-02 11:51:22.393 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:51:22 np0005604943 nova_compute[238883]: 2026-02-02 11:51:22.393 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:51:22 np0005604943 nova_compute[238883]: 2026-02-02 11:51:22.454 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 06:51:22 np0005604943 nova_compute[238883]: 2026-02-02 11:51:22.454 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 06:51:22 np0005604943 nova_compute[238883]: 2026-02-02 11:51:22.468 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:51:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:51:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613942866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:51:23 np0005604943 nova_compute[238883]: 2026-02-02 11:51:23.025 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:51:23 np0005604943 nova_compute[238883]: 2026-02-02 11:51:23.031 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:51:23 np0005604943 nova_compute[238883]: 2026-02-02 11:51:23.055 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:51:23 np0005604943 nova_compute[238883]: 2026-02-02 11:51:23.056 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 06:51:23 np0005604943 nova_compute[238883]: 2026-02-02 11:51:23.056 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:51:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:51:24 np0005604943 podman[241763]: 2026-02-02 11:51:24.414422729 +0000 UTC m=+0.045917926 container create 2d3391e03db1d3a409f5ec8c8ddc2a032a9b0335c1d7f938084683b2ee499a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bouman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:51:24 np0005604943 systemd[1]: Started libpod-conmon-2d3391e03db1d3a409f5ec8c8ddc2a032a9b0335c1d7f938084683b2ee499a1c.scope.
Feb  2 06:51:24 np0005604943 podman[241763]: 2026-02-02 11:51:24.38840148 +0000 UTC m=+0.019896707 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:51:24 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:51:24 np0005604943 podman[241763]: 2026-02-02 11:51:24.520531087 +0000 UTC m=+0.152026284 container init 2d3391e03db1d3a409f5ec8c8ddc2a032a9b0335c1d7f938084683b2ee499a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bouman, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb  2 06:51:24 np0005604943 podman[241763]: 2026-02-02 11:51:24.525831908 +0000 UTC m=+0.157327115 container start 2d3391e03db1d3a409f5ec8c8ddc2a032a9b0335c1d7f938084683b2ee499a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 06:51:24 np0005604943 interesting_bouman[241779]: 167 167
Feb  2 06:51:24 np0005604943 systemd[1]: libpod-2d3391e03db1d3a409f5ec8c8ddc2a032a9b0335c1d7f938084683b2ee499a1c.scope: Deactivated successfully.
Feb  2 06:51:24 np0005604943 podman[241763]: 2026-02-02 11:51:24.541708508 +0000 UTC m=+0.173203735 container attach 2d3391e03db1d3a409f5ec8c8ddc2a032a9b0335c1d7f938084683b2ee499a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bouman, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 06:51:24 np0005604943 podman[241763]: 2026-02-02 11:51:24.542078037 +0000 UTC m=+0.173573234 container died 2d3391e03db1d3a409f5ec8c8ddc2a032a9b0335c1d7f938084683b2ee499a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:51:24 np0005604943 systemd[1]: var-lib-containers-storage-overlay-79960fd51444fe73d5d3941f58b30bbe283e1b6f5bdca6d21f5d988ffbf3ee0f-merged.mount: Deactivated successfully.
Feb  2 06:51:24 np0005604943 podman[241763]: 2026-02-02 11:51:24.638717815 +0000 UTC m=+0.270213012 container remove 2d3391e03db1d3a409f5ec8c8ddc2a032a9b0335c1d7f938084683b2ee499a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bouman, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 06:51:24 np0005604943 systemd[1]: libpod-conmon-2d3391e03db1d3a409f5ec8c8ddc2a032a9b0335c1d7f938084683b2ee499a1c.scope: Deactivated successfully.
Feb  2 06:51:24 np0005604943 podman[241805]: 2026-02-02 11:51:24.785082389 +0000 UTC m=+0.048260028 container create 7cc340e7ed3dbfd89ed37711c62945b541f1dc89b691ce00e5e9d835a786edef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_bhabha, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:51:24 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:51:24 np0005604943 systemd[1]: Started libpod-conmon-7cc340e7ed3dbfd89ed37711c62945b541f1dc89b691ce00e5e9d835a786edef.scope.
Feb  2 06:51:24 np0005604943 podman[241805]: 2026-02-02 11:51:24.758217868 +0000 UTC m=+0.021395537 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:51:24 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:51:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab55794691855ea91f39876cebf1d640a94ccc621214664169f9718725a68ef5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:51:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab55794691855ea91f39876cebf1d640a94ccc621214664169f9718725a68ef5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:51:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab55794691855ea91f39876cebf1d640a94ccc621214664169f9718725a68ef5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:51:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab55794691855ea91f39876cebf1d640a94ccc621214664169f9718725a68ef5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:51:24 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab55794691855ea91f39876cebf1d640a94ccc621214664169f9718725a68ef5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:51:24 np0005604943 podman[241805]: 2026-02-02 11:51:24.881618674 +0000 UTC m=+0.144796343 container init 7cc340e7ed3dbfd89ed37711c62945b541f1dc89b691ce00e5e9d835a786edef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_bhabha, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:51:24 np0005604943 podman[241805]: 2026-02-02 11:51:24.888311912 +0000 UTC m=+0.151489561 container start 7cc340e7ed3dbfd89ed37711c62945b541f1dc89b691ce00e5e9d835a786edef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_bhabha, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:51:24 np0005604943 podman[241805]: 2026-02-02 11:51:24.901347076 +0000 UTC m=+0.164524875 container attach 7cc340e7ed3dbfd89ed37711c62945b541f1dc89b691ce00e5e9d835a786edef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_bhabha, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:51:25 np0005604943 nova_compute[238883]: 2026-02-02 11:51:25.049 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:51:25 np0005604943 nova_compute[238883]: 2026-02-02 11:51:25.050 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:51:25 np0005604943 nova_compute[238883]: 2026-02-02 11:51:25.051 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 06:51:25 np0005604943 nova_compute[238883]: 2026-02-02 11:51:25.051 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 06:51:25 np0005604943 nova_compute[238883]: 2026-02-02 11:51:25.064 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 06:51:25 np0005604943 nova_compute[238883]: 2026-02-02 11:51:25.065 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:51:25 np0005604943 nova_compute[238883]: 2026-02-02 11:51:25.065 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:51:25 np0005604943 nova_compute[238883]: 2026-02-02 11:51:25.066 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:51:25 np0005604943 nova_compute[238883]: 2026-02-02 11:51:25.066 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:51:25 np0005604943 nova_compute[238883]: 2026-02-02 11:51:25.066 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 06:51:25 np0005604943 amazing_bhabha[241821]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:51:25 np0005604943 amazing_bhabha[241821]: --> All data devices are unavailable
Feb  2 06:51:25 np0005604943 systemd[1]: libpod-7cc340e7ed3dbfd89ed37711c62945b541f1dc89b691ce00e5e9d835a786edef.scope: Deactivated successfully.
Feb  2 06:51:25 np0005604943 podman[241805]: 2026-02-02 11:51:25.262399283 +0000 UTC m=+0.525576922 container died 7cc340e7ed3dbfd89ed37711c62945b541f1dc89b691ce00e5e9d835a786edef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_bhabha, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:51:25 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ab55794691855ea91f39876cebf1d640a94ccc621214664169f9718725a68ef5-merged.mount: Deactivated successfully.
Feb  2 06:51:25 np0005604943 podman[241805]: 2026-02-02 11:51:25.359633436 +0000 UTC m=+0.622811075 container remove 7cc340e7ed3dbfd89ed37711c62945b541f1dc89b691ce00e5e9d835a786edef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_bhabha, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:51:25 np0005604943 systemd[1]: libpod-conmon-7cc340e7ed3dbfd89ed37711c62945b541f1dc89b691ce00e5e9d835a786edef.scope: Deactivated successfully.
Feb  2 06:51:25 np0005604943 nova_compute[238883]: 2026-02-02 11:51:25.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:51:25 np0005604943 nova_compute[238883]: 2026-02-02 11:51:25.662 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:51:25 np0005604943 nova_compute[238883]: 2026-02-02 11:51:25.662 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:51:25 np0005604943 podman[241916]: 2026-02-02 11:51:25.736252754 +0000 UTC m=+0.032033069 container create 25d7148c4dce7581b4683577ff3d3f6c01cd27c1a60102e532e61f7452d1d0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:51:25 np0005604943 systemd[1]: Started libpod-conmon-25d7148c4dce7581b4683577ff3d3f6c01cd27c1a60102e532e61f7452d1d0d9.scope.
Feb  2 06:51:25 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:51:25 np0005604943 podman[241916]: 2026-02-02 11:51:25.720939389 +0000 UTC m=+0.016719714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:51:25 np0005604943 podman[241916]: 2026-02-02 11:51:25.834732562 +0000 UTC m=+0.130512897 container init 25d7148c4dce7581b4683577ff3d3f6c01cd27c1a60102e532e61f7452d1d0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_shamir, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:51:25 np0005604943 podman[241916]: 2026-02-02 11:51:25.83922829 +0000 UTC m=+0.135008605 container start 25d7148c4dce7581b4683577ff3d3f6c01cd27c1a60102e532e61f7452d1d0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 06:51:25 np0005604943 systemd[1]: libpod-25d7148c4dce7581b4683577ff3d3f6c01cd27c1a60102e532e61f7452d1d0d9.scope: Deactivated successfully.
Feb  2 06:51:25 np0005604943 adoring_shamir[241933]: 167 167
Feb  2 06:51:25 np0005604943 conmon[241933]: conmon 25d7148c4dce7581b468 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25d7148c4dce7581b4683577ff3d3f6c01cd27c1a60102e532e61f7452d1d0d9.scope/container/memory.events
Feb  2 06:51:25 np0005604943 podman[241916]: 2026-02-02 11:51:25.863172674 +0000 UTC m=+0.158952999 container attach 25d7148c4dce7581b4683577ff3d3f6c01cd27c1a60102e532e61f7452d1d0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_shamir, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 06:51:25 np0005604943 podman[241916]: 2026-02-02 11:51:25.863580094 +0000 UTC m=+0.159360399 container died 25d7148c4dce7581b4683577ff3d3f6c01cd27c1a60102e532e61f7452d1d0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 06:51:25 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f92f29cbba1d863f593a21139f60ef1f45a8a96d243a64ec39f3b7892d028ba7-merged.mount: Deactivated successfully.
Feb  2 06:51:25 np0005604943 podman[241916]: 2026-02-02 11:51:25.929453738 +0000 UTC m=+0.225234053 container remove 25d7148c4dce7581b4683577ff3d3f6c01cd27c1a60102e532e61f7452d1d0d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_shamir, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:51:25 np0005604943 systemd[1]: libpod-conmon-25d7148c4dce7581b4683577ff3d3f6c01cd27c1a60102e532e61f7452d1d0d9.scope: Deactivated successfully.
Feb  2 06:51:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:26 np0005604943 podman[241959]: 2026-02-02 11:51:26.045019477 +0000 UTC m=+0.038894561 container create c0cbacc9f5ac6dd22640fe87f2f0febf5edb7bd78fb1154dce351d75a229b847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gates, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:51:26 np0005604943 systemd[1]: Started libpod-conmon-c0cbacc9f5ac6dd22640fe87f2f0febf5edb7bd78fb1154dce351d75a229b847.scope.
Feb  2 06:51:26 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:51:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d515dfa4e40edf5a77e67ac0335f23a79b71c450fb4328fd5ad9832c267d4af1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:51:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d515dfa4e40edf5a77e67ac0335f23a79b71c450fb4328fd5ad9832c267d4af1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:51:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d515dfa4e40edf5a77e67ac0335f23a79b71c450fb4328fd5ad9832c267d4af1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:51:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d515dfa4e40edf5a77e67ac0335f23a79b71c450fb4328fd5ad9832c267d4af1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:51:26 np0005604943 podman[241959]: 2026-02-02 11:51:26.02585962 +0000 UTC m=+0.019734734 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:51:26 np0005604943 podman[241959]: 2026-02-02 11:51:26.124315956 +0000 UTC m=+0.118191070 container init c0cbacc9f5ac6dd22640fe87f2f0febf5edb7bd78fb1154dce351d75a229b847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:51:26 np0005604943 podman[241959]: 2026-02-02 11:51:26.131026893 +0000 UTC m=+0.124901977 container start c0cbacc9f5ac6dd22640fe87f2f0febf5edb7bd78fb1154dce351d75a229b847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 06:51:26 np0005604943 podman[241959]: 2026-02-02 11:51:26.152920883 +0000 UTC m=+0.146795967 container attach c0cbacc9f5ac6dd22640fe87f2f0febf5edb7bd78fb1154dce351d75a229b847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]: {
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:    "0": [
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:        {
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "devices": [
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "/dev/loop3"
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            ],
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_name": "ceph_lv0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_size": "21470642176",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "name": "ceph_lv0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "tags": {
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.cluster_name": "ceph",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.crush_device_class": "",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.encrypted": "0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.objectstore": "bluestore",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.osd_id": "0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.type": "block",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.vdo": "0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.with_tpm": "0"
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            },
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "type": "block",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "vg_name": "ceph_vg0"
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:        }
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:    ],
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:    "1": [
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:        {
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "devices": [
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "/dev/loop4"
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            ],
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_name": "ceph_lv1",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_size": "21470642176",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "name": "ceph_lv1",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "tags": {
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.cluster_name": "ceph",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.crush_device_class": "",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.encrypted": "0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.objectstore": "bluestore",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.osd_id": "1",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.type": "block",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.vdo": "0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.with_tpm": "0"
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            },
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "type": "block",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "vg_name": "ceph_vg1"
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:        }
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:    ],
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:    "2": [
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:        {
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "devices": [
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "/dev/loop5"
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            ],
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_name": "ceph_lv2",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_size": "21470642176",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "name": "ceph_lv2",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "tags": {
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.cluster_name": "ceph",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.crush_device_class": "",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.encrypted": "0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.objectstore": "bluestore",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.osd_id": "2",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.type": "block",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.vdo": "0",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:                "ceph.with_tpm": "0"
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            },
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "type": "block",
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:            "vg_name": "ceph_vg2"
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:        }
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]:    ]
Feb  2 06:51:26 np0005604943 beautiful_gates[241975]: }
Feb  2 06:51:26 np0005604943 systemd[1]: libpod-c0cbacc9f5ac6dd22640fe87f2f0febf5edb7bd78fb1154dce351d75a229b847.scope: Deactivated successfully.
Feb  2 06:51:26 np0005604943 podman[241959]: 2026-02-02 11:51:26.384348008 +0000 UTC m=+0.378223092 container died c0cbacc9f5ac6dd22640fe87f2f0febf5edb7bd78fb1154dce351d75a229b847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gates, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb  2 06:51:26 np0005604943 systemd[1]: var-lib-containers-storage-overlay-d515dfa4e40edf5a77e67ac0335f23a79b71c450fb4328fd5ad9832c267d4af1-merged.mount: Deactivated successfully.
Feb  2 06:51:26 np0005604943 podman[241959]: 2026-02-02 11:51:26.465339122 +0000 UTC m=+0.459214206 container remove c0cbacc9f5ac6dd22640fe87f2f0febf5edb7bd78fb1154dce351d75a229b847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:51:26 np0005604943 systemd[1]: libpod-conmon-c0cbacc9f5ac6dd22640fe87f2f0febf5edb7bd78fb1154dce351d75a229b847.scope: Deactivated successfully.
Feb  2 06:51:26 np0005604943 podman[242058]: 2026-02-02 11:51:26.866371637 +0000 UTC m=+0.038522941 container create 574bbeb0ebe60781c9bf5ae3c86defbfc7c69bb742abc0c23abe5f8a42beee27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mendel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:51:26 np0005604943 systemd[1]: Started libpod-conmon-574bbeb0ebe60781c9bf5ae3c86defbfc7c69bb742abc0c23abe5f8a42beee27.scope.
Feb  2 06:51:26 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:51:26 np0005604943 podman[242058]: 2026-02-02 11:51:26.933843443 +0000 UTC m=+0.105994757 container init 574bbeb0ebe60781c9bf5ae3c86defbfc7c69bb742abc0c23abe5f8a42beee27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:51:26 np0005604943 podman[242058]: 2026-02-02 11:51:26.939124152 +0000 UTC m=+0.111275456 container start 574bbeb0ebe60781c9bf5ae3c86defbfc7c69bb742abc0c23abe5f8a42beee27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mendel, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 06:51:26 np0005604943 podman[242058]: 2026-02-02 11:51:26.845009811 +0000 UTC m=+0.017161145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:51:26 np0005604943 amazing_mendel[242074]: 167 167
Feb  2 06:51:26 np0005604943 systemd[1]: libpod-574bbeb0ebe60781c9bf5ae3c86defbfc7c69bb742abc0c23abe5f8a42beee27.scope: Deactivated successfully.
Feb  2 06:51:26 np0005604943 podman[242058]: 2026-02-02 11:51:26.943674333 +0000 UTC m=+0.115825637 container attach 574bbeb0ebe60781c9bf5ae3c86defbfc7c69bb742abc0c23abe5f8a42beee27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mendel, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 06:51:26 np0005604943 podman[242058]: 2026-02-02 11:51:26.944031682 +0000 UTC m=+0.116182986 container died 574bbeb0ebe60781c9bf5ae3c86defbfc7c69bb742abc0c23abe5f8a42beee27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:51:26 np0005604943 systemd[1]: var-lib-containers-storage-overlay-482a49a7929953fc89a8f1c6f0ac535e109977bdb6379ec1cd0fc15ae8bf5143-merged.mount: Deactivated successfully.
Feb  2 06:51:26 np0005604943 podman[242058]: 2026-02-02 11:51:26.976428939 +0000 UTC m=+0.148580233 container remove 574bbeb0ebe60781c9bf5ae3c86defbfc7c69bb742abc0c23abe5f8a42beee27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 06:51:26 np0005604943 systemd[1]: libpod-conmon-574bbeb0ebe60781c9bf5ae3c86defbfc7c69bb742abc0c23abe5f8a42beee27.scope: Deactivated successfully.
Feb  2 06:51:27 np0005604943 podman[242098]: 2026-02-02 11:51:27.087755596 +0000 UTC m=+0.034177996 container create f3271abe7b1a571bb5fcf407bd54f2967dad71e61b6776b50bb1b063ff3a0ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bose, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:51:27 np0005604943 systemd[1]: Started libpod-conmon-f3271abe7b1a571bb5fcf407bd54f2967dad71e61b6776b50bb1b063ff3a0ab8.scope.
Feb  2 06:51:27 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:51:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f513c0ff38a329cecc60c5f5465ed4fa43a495e36a2b9648af48c76b7ffa057/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:51:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f513c0ff38a329cecc60c5f5465ed4fa43a495e36a2b9648af48c76b7ffa057/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:51:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f513c0ff38a329cecc60c5f5465ed4fa43a495e36a2b9648af48c76b7ffa057/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:51:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f513c0ff38a329cecc60c5f5465ed4fa43a495e36a2b9648af48c76b7ffa057/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:51:27 np0005604943 podman[242098]: 2026-02-02 11:51:27.165091393 +0000 UTC m=+0.111513803 container init f3271abe7b1a571bb5fcf407bd54f2967dad71e61b6776b50bb1b063ff3a0ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:51:27 np0005604943 podman[242098]: 2026-02-02 11:51:27.073957441 +0000 UTC m=+0.020379851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:51:27 np0005604943 podman[242098]: 2026-02-02 11:51:27.170715872 +0000 UTC m=+0.117138262 container start f3271abe7b1a571bb5fcf407bd54f2967dad71e61b6776b50bb1b063ff3a0ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bose, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:51:27 np0005604943 podman[242098]: 2026-02-02 11:51:27.17632049 +0000 UTC m=+0.122742910 container attach f3271abe7b1a571bb5fcf407bd54f2967dad71e61b6776b50bb1b063ff3a0ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bose, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:51:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:51:27 np0005604943 lvm[242191]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:51:27 np0005604943 lvm[242191]: VG ceph_vg0 finished
Feb  2 06:51:27 np0005604943 lvm[242194]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:51:27 np0005604943 lvm[242194]: VG ceph_vg1 finished
Feb  2 06:51:27 np0005604943 lvm[242195]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:51:27 np0005604943 lvm[242195]: VG ceph_vg2 finished
Feb  2 06:51:27 np0005604943 lvm[242196]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:51:27 np0005604943 lvm[242196]: VG ceph_vg2 finished
Feb  2 06:51:27 np0005604943 lvm[242199]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:51:27 np0005604943 lvm[242199]: VG ceph_vg2 finished
Feb  2 06:51:27 np0005604943 laughing_bose[242115]: {}
Feb  2 06:51:27 np0005604943 systemd[1]: libpod-f3271abe7b1a571bb5fcf407bd54f2967dad71e61b6776b50bb1b063ff3a0ab8.scope: Deactivated successfully.
Feb  2 06:51:27 np0005604943 podman[242098]: 2026-02-02 11:51:27.88700294 +0000 UTC m=+0.833425330 container died f3271abe7b1a571bb5fcf407bd54f2967dad71e61b6776b50bb1b063ff3a0ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bose, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:51:27 np0005604943 systemd[1]: libpod-f3271abe7b1a571bb5fcf407bd54f2967dad71e61b6776b50bb1b063ff3a0ab8.scope: Consumed 1.017s CPU time.
Feb  2 06:51:27 np0005604943 systemd[1]: var-lib-containers-storage-overlay-7f513c0ff38a329cecc60c5f5465ed4fa43a495e36a2b9648af48c76b7ffa057-merged.mount: Deactivated successfully.
Feb  2 06:51:27 np0005604943 podman[242098]: 2026-02-02 11:51:27.9440692 +0000 UTC m=+0.890491590 container remove f3271abe7b1a571bb5fcf407bd54f2967dad71e61b6776b50bb1b063ff3a0ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bose, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 06:51:27 np0005604943 systemd[1]: libpod-conmon-f3271abe7b1a571bb5fcf407bd54f2967dad71e61b6776b50bb1b063ff3a0ab8.scope: Deactivated successfully.
Feb  2 06:51:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:51:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:51:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:51:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:51:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:29 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:51:29 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:51:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:51:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:51:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:51:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:51:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:51:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:51:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:51:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:51:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:51:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:51:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:50 np0005604943 podman[242240]: 2026-02-02 11:51:50.04075673 +0000 UTC m=+0.060196084 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb  2 06:51:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:50 np0005604943 podman[242239]: 2026-02-02 11:51:50.070047736 +0000 UTC m=+0.089879600 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 06:51:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:51:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 06:51:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5838 writes, 24K keys, 5838 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5838 writes, 989 syncs, 5.90 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s#012Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d69e545a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d69e545a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb  2 06:51:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:51:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:51:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 06:51:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 7161 writes, 29K keys, 7161 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7161 writes, 1397 syncs, 5.13 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 224 writes, 337 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558f0e3f1a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558f0e3f1a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Feb  2 06:51:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 06:52:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5680 writes, 24K keys, 5680 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5680 writes, 902 syncs, 6.30 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560f677dd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560f677dd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb  2 06:52:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:52:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:04 np0005604943 ceph-mgr[75558]: [devicehealth INFO root] Check health
Feb  2 06:52:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:52:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:52:09
Feb  2 06:52:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:52:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:52:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'images', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.log', '.rgw.root']
Feb  2 06:52:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:52:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:52:10.016 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:52:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:52:10.017 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:52:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:52:10.017 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:52:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:52:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:52:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:52:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:20 np0005604943 nova_compute[238883]: 2026-02-02 11:52:20.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:52:20 np0005604943 nova_compute[238883]: 2026-02-02 11:52:20.642 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb  2 06:52:20 np0005604943 nova_compute[238883]: 2026-02-02 11:52:20.801 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb  2 06:52:20 np0005604943 nova_compute[238883]: 2026-02-02 11:52:20.801 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:52:20 np0005604943 nova_compute[238883]: 2026-02-02 11:52:20.801 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb  2 06:52:20 np0005604943 nova_compute[238883]: 2026-02-02 11:52:20.872 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:52:21 np0005604943 podman[242287]: 2026-02-02 11:52:21.028689742 +0000 UTC m=+0.047955260 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Feb  2 06:52:21 np0005604943 podman[242286]: 2026-02-02 11:52:21.051439014 +0000 UTC m=+0.072252583 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.9136828634743115e-06 of space, bias 4.0, pg target 0.0022964194361691738 quantized to 16 (current 16)
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:52:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:52:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:52:23 np0005604943 nova_compute[238883]: 2026-02-02 11:52:23.899 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:52:23 np0005604943 nova_compute[238883]: 2026-02-02 11:52:23.899 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 06:52:23 np0005604943 nova_compute[238883]: 2026-02-02 11:52:23.899 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 06:52:23 np0005604943 nova_compute[238883]: 2026-02-02 11:52:23.916 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 06:52:23 np0005604943 nova_compute[238883]: 2026-02-02 11:52:23.917 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:52:23 np0005604943 nova_compute[238883]: 2026-02-02 11:52:23.917 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:52:23 np0005604943 nova_compute[238883]: 2026-02-02 11:52:23.917 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 06:52:23 np0005604943 nova_compute[238883]: 2026-02-02 11:52:23.917 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:52:23 np0005604943 nova_compute[238883]: 2026-02-02 11:52:23.939 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:52:23 np0005604943 nova_compute[238883]: 2026-02-02 11:52:23.940 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:52:23 np0005604943 nova_compute[238883]: 2026-02-02 11:52:23.940 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:52:23 np0005604943 nova_compute[238883]: 2026-02-02 11:52:23.940 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 06:52:23 np0005604943 nova_compute[238883]: 2026-02-02 11:52:23.940 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:52:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:52:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3404460741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:52:24 np0005604943 nova_compute[238883]: 2026-02-02 11:52:24.501 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:52:24 np0005604943 nova_compute[238883]: 2026-02-02 11:52:24.624 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:52:24 np0005604943 nova_compute[238883]: 2026-02-02 11:52:24.625 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 06:52:24 np0005604943 nova_compute[238883]: 2026-02-02 11:52:24.625 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:52:24 np0005604943 nova_compute[238883]: 2026-02-02 11:52:24.625 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:52:24 np0005604943 nova_compute[238883]: 2026-02-02 11:52:24.894 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 06:52:24 np0005604943 nova_compute[238883]: 2026-02-02 11:52:24.895 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 06:52:24 np0005604943 nova_compute[238883]: 2026-02-02 11:52:24.948 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Refreshing inventories for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  2 06:52:25 np0005604943 nova_compute[238883]: 2026-02-02 11:52:25.036 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Updating ProviderTree inventory for provider 30401227-b88f-415d-9c2d-3119bd1baf61 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  2 06:52:25 np0005604943 nova_compute[238883]: 2026-02-02 11:52:25.037 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Updating inventory in ProviderTree for provider 30401227-b88f-415d-9c2d-3119bd1baf61 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 06:52:25 np0005604943 nova_compute[238883]: 2026-02-02 11:52:25.053 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Refreshing aggregate associations for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  2 06:52:25 np0005604943 nova_compute[238883]: 2026-02-02 11:52:25.074 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Refreshing trait associations for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  2 06:52:25 np0005604943 nova_compute[238883]: 2026-02-02 11:52:25.091 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:52:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:52:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417989769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:52:25 np0005604943 nova_compute[238883]: 2026-02-02 11:52:25.650 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:52:25 np0005604943 nova_compute[238883]: 2026-02-02 11:52:25.656 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:52:25 np0005604943 nova_compute[238883]: 2026-02-02 11:52:25.675 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:52:25 np0005604943 nova_compute[238883]: 2026-02-02 11:52:25.676 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 06:52:25 np0005604943 nova_compute[238883]: 2026-02-02 11:52:25.677 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:52:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:26 np0005604943 nova_compute[238883]: 2026-02-02 11:52:26.402 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:52:26 np0005604943 nova_compute[238883]: 2026-02-02 11:52:26.402 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:52:26 np0005604943 nova_compute[238883]: 2026-02-02 11:52:26.403 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:52:26 np0005604943 nova_compute[238883]: 2026-02-02 11:52:26.403 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:52:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:52:27 np0005604943 nova_compute[238883]: 2026-02-02 11:52:27.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:52:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:52:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:52:29 np0005604943 podman[242584]: 2026-02-02 11:52:29.286331584 +0000 UTC m=+0.034103874 container create d1d521a9b23a809c46e956a0bfaf7053df74b014ce5230f9ad478969147293dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 06:52:29 np0005604943 systemd[1]: Started libpod-conmon-d1d521a9b23a809c46e956a0bfaf7053df74b014ce5230f9ad478969147293dd.scope.
Feb  2 06:52:29 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:52:29 np0005604943 podman[242584]: 2026-02-02 11:52:29.268610925 +0000 UTC m=+0.016383235 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:52:29 np0005604943 podman[242584]: 2026-02-02 11:52:29.36704553 +0000 UTC m=+0.114817820 container init d1d521a9b23a809c46e956a0bfaf7053df74b014ce5230f9ad478969147293dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 06:52:29 np0005604943 podman[242584]: 2026-02-02 11:52:29.377560959 +0000 UTC m=+0.125333249 container start d1d521a9b23a809c46e956a0bfaf7053df74b014ce5230f9ad478969147293dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lederberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 06:52:29 np0005604943 podman[242584]: 2026-02-02 11:52:29.381543574 +0000 UTC m=+0.129315894 container attach d1d521a9b23a809c46e956a0bfaf7053df74b014ce5230f9ad478969147293dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lederberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:52:29 np0005604943 affectionate_lederberg[242600]: 167 167
Feb  2 06:52:29 np0005604943 systemd[1]: libpod-d1d521a9b23a809c46e956a0bfaf7053df74b014ce5230f9ad478969147293dd.scope: Deactivated successfully.
Feb  2 06:52:29 np0005604943 conmon[242600]: conmon d1d521a9b23a809c46e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1d521a9b23a809c46e956a0bfaf7053df74b014ce5230f9ad478969147293dd.scope/container/memory.events
Feb  2 06:52:29 np0005604943 podman[242584]: 2026-02-02 11:52:29.386510935 +0000 UTC m=+0.134283235 container died d1d521a9b23a809c46e956a0bfaf7053df74b014ce5230f9ad478969147293dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lederberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 06:52:29 np0005604943 systemd[1]: var-lib-containers-storage-overlay-dbff66adb1895d6339e06ad45c6c104fa16a065f7ab55f118fa0b8281fcc83ff-merged.mount: Deactivated successfully.
Feb  2 06:52:29 np0005604943 podman[242584]: 2026-02-02 11:52:29.432725128 +0000 UTC m=+0.180497418 container remove d1d521a9b23a809c46e956a0bfaf7053df74b014ce5230f9ad478969147293dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lederberg, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb  2 06:52:29 np0005604943 systemd[1]: libpod-conmon-d1d521a9b23a809c46e956a0bfaf7053df74b014ce5230f9ad478969147293dd.scope: Deactivated successfully.
Feb  2 06:52:29 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:52:29 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:52:29 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:52:29 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:52:29 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:52:29 np0005604943 podman[242625]: 2026-02-02 11:52:29.552682854 +0000 UTC m=+0.039715803 container create 650f90527da374bec5157c5cdf8746a46f2ab75220a5e4402310b008fcc3e7d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:52:29 np0005604943 systemd[1]: Started libpod-conmon-650f90527da374bec5157c5cdf8746a46f2ab75220a5e4402310b008fcc3e7d3.scope.
Feb  2 06:52:29 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:52:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/984859c62839103e2be26e9f90280c6769527f41fbb7fa37647442f590def4ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:52:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/984859c62839103e2be26e9f90280c6769527f41fbb7fa37647442f590def4ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:52:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/984859c62839103e2be26e9f90280c6769527f41fbb7fa37647442f590def4ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:52:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/984859c62839103e2be26e9f90280c6769527f41fbb7fa37647442f590def4ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:52:29 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/984859c62839103e2be26e9f90280c6769527f41fbb7fa37647442f590def4ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:52:29 np0005604943 podman[242625]: 2026-02-02 11:52:29.631770406 +0000 UTC m=+0.118803375 container init 650f90527da374bec5157c5cdf8746a46f2ab75220a5e4402310b008fcc3e7d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 06:52:29 np0005604943 podman[242625]: 2026-02-02 11:52:29.535648543 +0000 UTC m=+0.022681512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:52:29 np0005604943 podman[242625]: 2026-02-02 11:52:29.640427446 +0000 UTC m=+0.127460395 container start 650f90527da374bec5157c5cdf8746a46f2ab75220a5e4402310b008fcc3e7d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 06:52:29 np0005604943 podman[242625]: 2026-02-02 11:52:29.643664472 +0000 UTC m=+0.130697421 container attach 650f90527da374bec5157c5cdf8746a46f2ab75220a5e4402310b008fcc3e7d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:52:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:30 np0005604943 romantic_taussig[242641]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:52:30 np0005604943 romantic_taussig[242641]: --> All data devices are unavailable
Feb  2 06:52:30 np0005604943 systemd[1]: libpod-650f90527da374bec5157c5cdf8746a46f2ab75220a5e4402310b008fcc3e7d3.scope: Deactivated successfully.
Feb  2 06:52:30 np0005604943 podman[242625]: 2026-02-02 11:52:30.157085241 +0000 UTC m=+0.644118200 container died 650f90527da374bec5157c5cdf8746a46f2ab75220a5e4402310b008fcc3e7d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 06:52:30 np0005604943 systemd[1]: var-lib-containers-storage-overlay-984859c62839103e2be26e9f90280c6769527f41fbb7fa37647442f590def4ba-merged.mount: Deactivated successfully.
Feb  2 06:52:30 np0005604943 podman[242625]: 2026-02-02 11:52:30.207434493 +0000 UTC m=+0.694467442 container remove 650f90527da374bec5157c5cdf8746a46f2ab75220a5e4402310b008fcc3e7d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:52:30 np0005604943 systemd[1]: libpod-conmon-650f90527da374bec5157c5cdf8746a46f2ab75220a5e4402310b008fcc3e7d3.scope: Deactivated successfully.
Feb  2 06:52:30 np0005604943 podman[242735]: 2026-02-02 11:52:30.660803703 +0000 UTC m=+0.040258296 container create 5790a129a75a8702880c190e2df2b613c89a9a735f44683e74562af1aacc4e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 06:52:30 np0005604943 systemd[1]: Started libpod-conmon-5790a129a75a8702880c190e2df2b613c89a9a735f44683e74562af1aacc4e1b.scope.
Feb  2 06:52:30 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:52:30 np0005604943 podman[242735]: 2026-02-02 11:52:30.722513556 +0000 UTC m=+0.101968179 container init 5790a129a75a8702880c190e2df2b613c89a9a735f44683e74562af1aacc4e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:52:30 np0005604943 podman[242735]: 2026-02-02 11:52:30.728244908 +0000 UTC m=+0.107699501 container start 5790a129a75a8702880c190e2df2b613c89a9a735f44683e74562af1aacc4e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 06:52:30 np0005604943 angry_sanderson[242751]: 167 167
Feb  2 06:52:30 np0005604943 systemd[1]: libpod-5790a129a75a8702880c190e2df2b613c89a9a735f44683e74562af1aacc4e1b.scope: Deactivated successfully.
Feb  2 06:52:30 np0005604943 podman[242735]: 2026-02-02 11:52:30.733034505 +0000 UTC m=+0.112489128 container attach 5790a129a75a8702880c190e2df2b613c89a9a735f44683e74562af1aacc4e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:52:30 np0005604943 podman[242735]: 2026-02-02 11:52:30.733501247 +0000 UTC m=+0.112955860 container died 5790a129a75a8702880c190e2df2b613c89a9a735f44683e74562af1aacc4e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 06:52:30 np0005604943 podman[242735]: 2026-02-02 11:52:30.642854528 +0000 UTC m=+0.022309151 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:52:30 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b4d556f24382c0b73001f228332561bf94bf06d72c9d8b11aa3664b028768765-merged.mount: Deactivated successfully.
Feb  2 06:52:30 np0005604943 podman[242735]: 2026-02-02 11:52:30.774152703 +0000 UTC m=+0.153607296 container remove 5790a129a75a8702880c190e2df2b613c89a9a735f44683e74562af1aacc4e1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:52:30 np0005604943 systemd[1]: libpod-conmon-5790a129a75a8702880c190e2df2b613c89a9a735f44683e74562af1aacc4e1b.scope: Deactivated successfully.
Feb  2 06:52:30 np0005604943 podman[242774]: 2026-02-02 11:52:30.89377138 +0000 UTC m=+0.035616805 container create a65b52528a524125970916fafa9fa45bb37895bf9491762f32f3b29c1385fc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 06:52:30 np0005604943 systemd[1]: Started libpod-conmon-a65b52528a524125970916fafa9fa45bb37895bf9491762f32f3b29c1385fc75.scope.
Feb  2 06:52:30 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:52:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed713d0add2417dd74818ace91206ec1be121d4a23372966b62553e31d00625/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:52:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed713d0add2417dd74818ace91206ec1be121d4a23372966b62553e31d00625/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:52:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed713d0add2417dd74818ace91206ec1be121d4a23372966b62553e31d00625/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:52:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed713d0add2417dd74818ace91206ec1be121d4a23372966b62553e31d00625/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:52:30 np0005604943 podman[242774]: 2026-02-02 11:52:30.967589633 +0000 UTC m=+0.109435088 container init a65b52528a524125970916fafa9fa45bb37895bf9491762f32f3b29c1385fc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_khayyam, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:52:30 np0005604943 podman[242774]: 2026-02-02 11:52:30.97316019 +0000 UTC m=+0.115005625 container start a65b52528a524125970916fafa9fa45bb37895bf9491762f32f3b29c1385fc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:52:30 np0005604943 podman[242774]: 2026-02-02 11:52:30.877733225 +0000 UTC m=+0.019578670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:52:30 np0005604943 podman[242774]: 2026-02-02 11:52:30.976294624 +0000 UTC m=+0.118140079 container attach a65b52528a524125970916fafa9fa45bb37895bf9491762f32f3b29c1385fc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]: {
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:    "0": [
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:        {
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "devices": [
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "/dev/loop3"
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            ],
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_name": "ceph_lv0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_size": "21470642176",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "name": "ceph_lv0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "tags": {
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.cluster_name": "ceph",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.crush_device_class": "",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.encrypted": "0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.objectstore": "bluestore",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.osd_id": "0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.type": "block",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.vdo": "0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.with_tpm": "0"
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            },
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "type": "block",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "vg_name": "ceph_vg0"
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:        }
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:    ],
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:    "1": [
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:        {
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "devices": [
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "/dev/loop4"
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            ],
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_name": "ceph_lv1",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_size": "21470642176",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "name": "ceph_lv1",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "tags": {
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.cluster_name": "ceph",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.crush_device_class": "",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.encrypted": "0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.objectstore": "bluestore",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.osd_id": "1",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.type": "block",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.vdo": "0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.with_tpm": "0"
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            },
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "type": "block",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "vg_name": "ceph_vg1"
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:        }
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:    ],
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:    "2": [
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:        {
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "devices": [
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "/dev/loop5"
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            ],
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_name": "ceph_lv2",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_size": "21470642176",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "name": "ceph_lv2",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "tags": {
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.cluster_name": "ceph",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.crush_device_class": "",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.encrypted": "0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.objectstore": "bluestore",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.osd_id": "2",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.type": "block",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.vdo": "0",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:                "ceph.with_tpm": "0"
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            },
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "type": "block",
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:            "vg_name": "ceph_vg2"
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:        }
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]:    ]
Feb  2 06:52:31 np0005604943 xenodochial_khayyam[242791]: }
Feb  2 06:52:31 np0005604943 systemd[1]: libpod-a65b52528a524125970916fafa9fa45bb37895bf9491762f32f3b29c1385fc75.scope: Deactivated successfully.
Feb  2 06:52:31 np0005604943 podman[242774]: 2026-02-02 11:52:31.269306929 +0000 UTC m=+0.411152364 container died a65b52528a524125970916fafa9fa45bb37895bf9491762f32f3b29c1385fc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:52:31 np0005604943 systemd[1]: var-lib-containers-storage-overlay-9ed713d0add2417dd74818ace91206ec1be121d4a23372966b62553e31d00625-merged.mount: Deactivated successfully.
Feb  2 06:52:31 np0005604943 podman[242774]: 2026-02-02 11:52:31.314353121 +0000 UTC m=+0.456198546 container remove a65b52528a524125970916fafa9fa45bb37895bf9491762f32f3b29c1385fc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_khayyam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 06:52:31 np0005604943 systemd[1]: libpod-conmon-a65b52528a524125970916fafa9fa45bb37895bf9491762f32f3b29c1385fc75.scope: Deactivated successfully.
Feb  2 06:52:31 np0005604943 podman[242874]: 2026-02-02 11:52:31.702445723 +0000 UTC m=+0.038594582 container create 1909a2a1efed948ffcb58f728580531af5b031f1ddc5a407da42eed09b625096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_easley, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:52:31 np0005604943 systemd[1]: Started libpod-conmon-1909a2a1efed948ffcb58f728580531af5b031f1ddc5a407da42eed09b625096.scope.
Feb  2 06:52:31 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:52:31 np0005604943 podman[242874]: 2026-02-02 11:52:31.768307006 +0000 UTC m=+0.104455895 container init 1909a2a1efed948ffcb58f728580531af5b031f1ddc5a407da42eed09b625096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_easley, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 06:52:31 np0005604943 podman[242874]: 2026-02-02 11:52:31.776117683 +0000 UTC m=+0.112266552 container start 1909a2a1efed948ffcb58f728580531af5b031f1ddc5a407da42eed09b625096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_easley, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb  2 06:52:31 np0005604943 podman[242874]: 2026-02-02 11:52:31.68191746 +0000 UTC m=+0.018066339 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:52:31 np0005604943 podman[242874]: 2026-02-02 11:52:31.779263706 +0000 UTC m=+0.115412595 container attach 1909a2a1efed948ffcb58f728580531af5b031f1ddc5a407da42eed09b625096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 06:52:31 np0005604943 systemd[1]: libpod-1909a2a1efed948ffcb58f728580531af5b031f1ddc5a407da42eed09b625096.scope: Deactivated successfully.
Feb  2 06:52:31 np0005604943 hopeful_easley[242890]: 167 167
Feb  2 06:52:31 np0005604943 conmon[242890]: conmon 1909a2a1efed948ffcb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1909a2a1efed948ffcb58f728580531af5b031f1ddc5a407da42eed09b625096.scope/container/memory.events
Feb  2 06:52:31 np0005604943 podman[242874]: 2026-02-02 11:52:31.781774963 +0000 UTC m=+0.117923852 container died 1909a2a1efed948ffcb58f728580531af5b031f1ddc5a407da42eed09b625096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_easley, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Feb  2 06:52:31 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b51a98a8a26f4faad11f8bb392422148af54dbab10f416412027d8b766f585cf-merged.mount: Deactivated successfully.
Feb  2 06:52:31 np0005604943 podman[242874]: 2026-02-02 11:52:31.809717342 +0000 UTC m=+0.145866201 container remove 1909a2a1efed948ffcb58f728580531af5b031f1ddc5a407da42eed09b625096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_easley, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 06:52:31 np0005604943 systemd[1]: libpod-conmon-1909a2a1efed948ffcb58f728580531af5b031f1ddc5a407da42eed09b625096.scope: Deactivated successfully.
Feb  2 06:52:31 np0005604943 podman[242913]: 2026-02-02 11:52:31.949986615 +0000 UTC m=+0.046228205 container create d2d712f790ad2e66cf25153437783bd3391d29ffe258b82dc97f3e87c16cca32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_poincare, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 06:52:31 np0005604943 systemd[1]: Started libpod-conmon-d2d712f790ad2e66cf25153437783bd3391d29ffe258b82dc97f3e87c16cca32.scope.
Feb  2 06:52:32 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:52:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d3d253fe34f0b0a5146b95dbc3a6929c5f0c15cdc0ebc5e9534cc95059f09d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:52:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d3d253fe34f0b0a5146b95dbc3a6929c5f0c15cdc0ebc5e9534cc95059f09d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:52:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d3d253fe34f0b0a5146b95dbc3a6929c5f0c15cdc0ebc5e9534cc95059f09d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:52:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d3d253fe34f0b0a5146b95dbc3a6929c5f0c15cdc0ebc5e9534cc95059f09d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:52:32 np0005604943 podman[242913]: 2026-02-02 11:52:31.932184163 +0000 UTC m=+0.028425773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:52:32 np0005604943 podman[242913]: 2026-02-02 11:52:32.042819662 +0000 UTC m=+0.139061272 container init d2d712f790ad2e66cf25153437783bd3391d29ffe258b82dc97f3e87c16cca32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_poincare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 06:52:32 np0005604943 podman[242913]: 2026-02-02 11:52:32.047911657 +0000 UTC m=+0.144153247 container start d2d712f790ad2e66cf25153437783bd3391d29ffe258b82dc97f3e87c16cca32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_poincare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Feb  2 06:52:32 np0005604943 podman[242913]: 2026-02-02 11:52:32.051196823 +0000 UTC m=+0.147438653 container attach d2d712f790ad2e66cf25153437783bd3391d29ffe258b82dc97f3e87c16cca32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 06:52:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:52:32 np0005604943 lvm[243005]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:52:32 np0005604943 lvm[243005]: VG ceph_vg0 finished
Feb  2 06:52:32 np0005604943 lvm[243008]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:52:32 np0005604943 lvm[243008]: VG ceph_vg1 finished
Feb  2 06:52:32 np0005604943 lvm[243010]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:52:32 np0005604943 lvm[243010]: VG ceph_vg2 finished
Feb  2 06:52:32 np0005604943 keen_poincare[242929]: {}
Feb  2 06:52:32 np0005604943 systemd[1]: libpod-d2d712f790ad2e66cf25153437783bd3391d29ffe258b82dc97f3e87c16cca32.scope: Deactivated successfully.
Feb  2 06:52:32 np0005604943 podman[242913]: 2026-02-02 11:52:32.794134278 +0000 UTC m=+0.890375868 container died d2d712f790ad2e66cf25153437783bd3391d29ffe258b82dc97f3e87c16cca32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 06:52:32 np0005604943 systemd[1]: libpod-d2d712f790ad2e66cf25153437783bd3391d29ffe258b82dc97f3e87c16cca32.scope: Consumed 1.046s CPU time.
Feb  2 06:52:32 np0005604943 systemd[1]: var-lib-containers-storage-overlay-75d3d253fe34f0b0a5146b95dbc3a6929c5f0c15cdc0ebc5e9534cc95059f09d-merged.mount: Deactivated successfully.
Feb  2 06:52:32 np0005604943 podman[242913]: 2026-02-02 11:52:32.832952155 +0000 UTC m=+0.929193745 container remove d2d712f790ad2e66cf25153437783bd3391d29ffe258b82dc97f3e87c16cca32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 06:52:32 np0005604943 systemd[1]: libpod-conmon-d2d712f790ad2e66cf25153437783bd3391d29ffe258b82dc97f3e87c16cca32.scope: Deactivated successfully.
Feb  2 06:52:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:52:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:52:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:52:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:52:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:52:33 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:52:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:34 np0005604943 ceph-osd[88236]: bluestore.MempoolThread fragmentation_score=0.000140 took=0.000027s
Feb  2 06:52:34 np0005604943 ceph-osd[87192]: bluestore.MempoolThread fragmentation_score=0.000135 took=0.000022s
Feb  2 06:52:34 np0005604943 ceph-osd[86144]: bluestore.MempoolThread fragmentation_score=0.000133 took=0.000067s
Feb  2 06:52:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:52:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:52:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:52:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:52:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:52:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:52:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:52:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:52:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:52:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296002275' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:52:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:52:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296002275' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:52:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:52:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.209797) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033168209911, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1353, "num_deletes": 251, "total_data_size": 2170944, "memory_usage": 2216976, "flush_reason": "Manual Compaction"}
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033168222007, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2129052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14999, "largest_seqno": 16351, "table_properties": {"data_size": 2122663, "index_size": 3588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13100, "raw_average_key_size": 19, "raw_value_size": 2109918, "raw_average_value_size": 3158, "num_data_blocks": 165, "num_entries": 668, "num_filter_entries": 668, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033028, "oldest_key_time": 1770033028, "file_creation_time": 1770033168, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 12278 microseconds, and 5245 cpu microseconds.
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.222086) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2129052 bytes OK
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.222121) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.223356) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.223371) EVENT_LOG_v1 {"time_micros": 1770033168223367, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.223399) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2164918, prev total WAL file size 2164918, number of live WAL files 2.
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.224274) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2079KB)], [35(7393KB)]
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033168224351, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9699645, "oldest_snapshot_seqno": -1}
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4019 keys, 7891075 bytes, temperature: kUnknown
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033168253666, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7891075, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7861974, "index_size": 17983, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 98193, "raw_average_key_size": 24, "raw_value_size": 7787005, "raw_average_value_size": 1937, "num_data_blocks": 761, "num_entries": 4019, "num_filter_entries": 4019, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770033168, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.253869) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7891075 bytes
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.254879) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 330.2 rd, 268.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.2 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(8.3) write-amplify(3.7) OK, records in: 4533, records dropped: 514 output_compression: NoCompression
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.254896) EVENT_LOG_v1 {"time_micros": 1770033168254886, "job": 16, "event": "compaction_finished", "compaction_time_micros": 29378, "compaction_time_cpu_micros": 11542, "output_level": 6, "num_output_files": 1, "total_output_size": 7891075, "num_input_records": 4533, "num_output_records": 4019, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033168255161, "job": 16, "event": "table_file_deletion", "file_number": 37}
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033168255636, "job": 16, "event": "table_file_deletion", "file_number": 35}
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.223996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.255870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.255880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.255883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.255885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:52:48 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:52:48.255887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:52:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:52 np0005604943 podman[243050]: 2026-02-02 11:52:52.040401374 +0000 UTC m=+0.057306978 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:52:52 np0005604943 podman[243049]: 2026-02-02 11:52:52.070795099 +0000 UTC m=+0.087628271 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb  2 06:52:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:52:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:52:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 0 B/s wr, 9 op/s
Feb  2 06:52:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 0 B/s wr, 9 op/s
Feb  2 06:52:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:52:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Feb  2 06:53:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 06:53:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 06:53:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:53:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb  2 06:53:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Feb  2 06:53:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:53:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Feb  2 06:53:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:53:09
Feb  2 06:53:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:53:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:53:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'vms', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Feb  2 06:53:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:53:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:53:10.018 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:53:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:53:10.018 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:53:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:53:10.018 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 12 op/s
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:53:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:53:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:53:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:53:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:53:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Feb  2 06:53:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Feb  2 06:53:15 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Feb  2 06:53:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:53:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Feb  2 06:53:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Feb  2 06:53:16 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Feb  2 06:53:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:53:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Feb  2 06:53:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Feb  2 06:53:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Feb  2 06:53:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 13 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.1 MiB/s wr, 16 op/s
Feb  2 06:53:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.4 MiB/s wr, 26 op/s
Feb  2 06:53:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Feb  2 06:53:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Feb  2 06:53:20 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003329781013563787 of space, bias 1.0, pg target 0.0998934304069136 quantized to 32 (current 32)
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.917750433164004e-06 of space, bias 4.0, pg target 0.0023013005197968046 quantized to 16 (current 16)
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:53:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:53:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.4 MiB/s wr, 26 op/s
Feb  2 06:53:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:53:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Feb  2 06:53:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Feb  2 06:53:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Feb  2 06:53:23 np0005604943 podman[243097]: 2026-02-02 11:53:23.032071379 +0000 UTC m=+0.052005292 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent)
Feb  2 06:53:23 np0005604943 podman[243096]: 2026-02-02 11:53:23.072671158 +0000 UTC m=+0.097231487 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:53:23 np0005604943 nova_compute[238883]: 2026-02-02 11:53:23.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:53:23 np0005604943 nova_compute[238883]: 2026-02-02 11:53:23.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 06:53:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 4.9 MiB/s wr, 44 op/s
Feb  2 06:53:24 np0005604943 nova_compute[238883]: 2026-02-02 11:53:24.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:53:24 np0005604943 nova_compute[238883]: 2026-02-02 11:53:24.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 06:53:24 np0005604943 nova_compute[238883]: 2026-02-02 11:53:24.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 06:53:24 np0005604943 nova_compute[238883]: 2026-02-02 11:53:24.658 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 06:53:24 np0005604943 nova_compute[238883]: 2026-02-02 11:53:24.658 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:53:24 np0005604943 nova_compute[238883]: 2026-02-02 11:53:24.659 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:53:24 np0005604943 nova_compute[238883]: 2026-02-02 11:53:24.699 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:53:24 np0005604943 nova_compute[238883]: 2026-02-02 11:53:24.699 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:53:24 np0005604943 nova_compute[238883]: 2026-02-02 11:53:24.699 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:53:24 np0005604943 nova_compute[238883]: 2026-02-02 11:53:24.700 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 06:53:24 np0005604943 nova_compute[238883]: 2026-02-02 11:53:24.700 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:53:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:53:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/66876195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:53:25 np0005604943 nova_compute[238883]: 2026-02-02 11:53:25.251 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:53:25 np0005604943 nova_compute[238883]: 2026-02-02 11:53:25.407 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:53:25 np0005604943 nova_compute[238883]: 2026-02-02 11:53:25.408 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5135MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 06:53:25 np0005604943 nova_compute[238883]: 2026-02-02 11:53:25.408 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:53:25 np0005604943 nova_compute[238883]: 2026-02-02 11:53:25.409 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:53:25 np0005604943 nova_compute[238883]: 2026-02-02 11:53:25.467 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 06:53:25 np0005604943 nova_compute[238883]: 2026-02-02 11:53:25.467 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 06:53:25 np0005604943 nova_compute[238883]: 2026-02-02 11:53:25.483 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:53:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:53:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3672096144' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:53:26 np0005604943 nova_compute[238883]: 2026-02-02 11:53:26.004 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:53:26 np0005604943 nova_compute[238883]: 2026-02-02 11:53:26.008 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 06:53:26 np0005604943 nova_compute[238883]: 2026-02-02 11:53:26.023 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 06:53:26 np0005604943 nova_compute[238883]: 2026-02-02 11:53:26.024 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 06:53:26 np0005604943 nova_compute[238883]: 2026-02-02 11:53:26.024 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:53:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.6 MiB/s wr, 35 op/s
Feb  2 06:53:27 np0005604943 nova_compute[238883]: 2026-02-02 11:53:27.009 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:53:27 np0005604943 nova_compute[238883]: 2026-02-02 11:53:27.009 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:53:27 np0005604943 nova_compute[238883]: 2026-02-02 11:53:27.009 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:53:27 np0005604943 nova_compute[238883]: 2026-02-02 11:53:27.010 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:53:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:53:27 np0005604943 nova_compute[238883]: 2026-02-02 11:53:27.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:53:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.6 MiB/s wr, 27 op/s
Feb  2 06:53:28 np0005604943 nova_compute[238883]: 2026-02-02 11:53:28.635 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:53:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 23 op/s
Feb  2 06:53:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.0 MiB/s wr, 22 op/s
Feb  2 06:53:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:53:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:53:33 np0005604943 podman[243325]: 2026-02-02 11:53:33.878110275 +0000 UTC m=+0.042279926 container create af86205d5b5989d2b435a4725778c3b5472095338a1a3a215d73cd0d7ae75774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:53:33 np0005604943 systemd[1]: Started libpod-conmon-af86205d5b5989d2b435a4725778c3b5472095338a1a3a215d73cd0d7ae75774.scope.
Feb  2 06:53:33 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:53:33 np0005604943 podman[243325]: 2026-02-02 11:53:33.934804404 +0000 UTC m=+0.098974085 container init af86205d5b5989d2b435a4725778c3b5472095338a1a3a215d73cd0d7ae75774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_borg, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 06:53:33 np0005604943 podman[243325]: 2026-02-02 11:53:33.940606932 +0000 UTC m=+0.104776583 container start af86205d5b5989d2b435a4725778c3b5472095338a1a3a215d73cd0d7ae75774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:53:33 np0005604943 podman[243325]: 2026-02-02 11:53:33.943893933 +0000 UTC m=+0.108063614 container attach af86205d5b5989d2b435a4725778c3b5472095338a1a3a215d73cd0d7ae75774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:53:33 np0005604943 agitated_borg[243341]: 167 167
Feb  2 06:53:33 np0005604943 systemd[1]: libpod-af86205d5b5989d2b435a4725778c3b5472095338a1a3a215d73cd0d7ae75774.scope: Deactivated successfully.
Feb  2 06:53:33 np0005604943 podman[243325]: 2026-02-02 11:53:33.945719782 +0000 UTC m=+0.109889433 container died af86205d5b5989d2b435a4725778c3b5472095338a1a3a215d73cd0d7ae75774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 06:53:33 np0005604943 podman[243325]: 2026-02-02 11:53:33.859087465 +0000 UTC m=+0.023257146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:53:33 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a311caebeba4097921e98d339cbce1b72f3123bfc0928fe94d37eb01a1a5abbd-merged.mount: Deactivated successfully.
Feb  2 06:53:33 np0005604943 podman[243325]: 2026-02-02 11:53:33.980850062 +0000 UTC m=+0.145019733 container remove af86205d5b5989d2b435a4725778c3b5472095338a1a3a215d73cd0d7ae75774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_borg, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:53:33 np0005604943 systemd[1]: libpod-conmon-af86205d5b5989d2b435a4725778c3b5472095338a1a3a215d73cd0d7ae75774.scope: Deactivated successfully.
Feb  2 06:53:34 np0005604943 podman[243364]: 2026-02-02 11:53:34.087354162 +0000 UTC m=+0.033758623 container create ba62e6e3978daf9616ab59993629c629a53bc1f14feda32f30854f882d579b7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 06:53:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 611 B/s rd, 1.7 MiB/s wr, 1 op/s
Feb  2 06:53:34 np0005604943 systemd[1]: Started libpod-conmon-ba62e6e3978daf9616ab59993629c629a53bc1f14feda32f30854f882d579b7f.scope.
Feb  2 06:53:34 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:53:34 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab396a87cc96c5e2968028bbf96f0abe928436ea36162f13b14d433f07a4799/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:53:34 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab396a87cc96c5e2968028bbf96f0abe928436ea36162f13b14d433f07a4799/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:53:34 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab396a87cc96c5e2968028bbf96f0abe928436ea36162f13b14d433f07a4799/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:53:34 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab396a87cc96c5e2968028bbf96f0abe928436ea36162f13b14d433f07a4799/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:53:34 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab396a87cc96c5e2968028bbf96f0abe928436ea36162f13b14d433f07a4799/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:53:34 np0005604943 podman[243364]: 2026-02-02 11:53:34.146791577 +0000 UTC m=+0.093196068 container init ba62e6e3978daf9616ab59993629c629a53bc1f14feda32f30854f882d579b7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:53:34 np0005604943 podman[243364]: 2026-02-02 11:53:34.154456976 +0000 UTC m=+0.100861467 container start ba62e6e3978daf9616ab59993629c629a53bc1f14feda32f30854f882d579b7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb  2 06:53:34 np0005604943 podman[243364]: 2026-02-02 11:53:34.158077046 +0000 UTC m=+0.104481547 container attach ba62e6e3978daf9616ab59993629c629a53bc1f14feda32f30854f882d579b7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:53:34 np0005604943 podman[243364]: 2026-02-02 11:53:34.072626761 +0000 UTC m=+0.019031262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:53:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 06:53:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:53:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:53:34 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:53:34 np0005604943 youthful_lederberg[243380]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:53:34 np0005604943 youthful_lederberg[243380]: --> All data devices are unavailable
Feb  2 06:53:34 np0005604943 systemd[1]: libpod-ba62e6e3978daf9616ab59993629c629a53bc1f14feda32f30854f882d579b7f.scope: Deactivated successfully.
Feb  2 06:53:34 np0005604943 podman[243364]: 2026-02-02 11:53:34.591291583 +0000 UTC m=+0.537696094 container died ba62e6e3978daf9616ab59993629c629a53bc1f14feda32f30854f882d579b7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 06:53:34 np0005604943 systemd[1]: var-lib-containers-storage-overlay-2ab396a87cc96c5e2968028bbf96f0abe928436ea36162f13b14d433f07a4799-merged.mount: Deactivated successfully.
Feb  2 06:53:34 np0005604943 podman[243364]: 2026-02-02 11:53:34.634510964 +0000 UTC m=+0.580915435 container remove ba62e6e3978daf9616ab59993629c629a53bc1f14feda32f30854f882d579b7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:53:34 np0005604943 systemd[1]: libpod-conmon-ba62e6e3978daf9616ab59993629c629a53bc1f14feda32f30854f882d579b7f.scope: Deactivated successfully.
Feb  2 06:53:35 np0005604943 podman[243478]: 2026-02-02 11:53:35.053302998 +0000 UTC m=+0.036031966 container create efcc069508bc99eeb02b7576b9d49fd4077981e276599ad2c58c13b8d8153783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_brown, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:53:35 np0005604943 systemd[1]: Started libpod-conmon-efcc069508bc99eeb02b7576b9d49fd4077981e276599ad2c58c13b8d8153783.scope.
Feb  2 06:53:35 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:53:35 np0005604943 podman[243478]: 2026-02-02 11:53:35.122103228 +0000 UTC m=+0.104832196 container init efcc069508bc99eeb02b7576b9d49fd4077981e276599ad2c58c13b8d8153783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_brown, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 06:53:35 np0005604943 podman[243478]: 2026-02-02 11:53:35.127259699 +0000 UTC m=+0.109988667 container start efcc069508bc99eeb02b7576b9d49fd4077981e276599ad2c58c13b8d8153783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 06:53:35 np0005604943 lucid_brown[243494]: 167 167
Feb  2 06:53:35 np0005604943 podman[243478]: 2026-02-02 11:53:35.130294072 +0000 UTC m=+0.113023060 container attach efcc069508bc99eeb02b7576b9d49fd4077981e276599ad2c58c13b8d8153783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Feb  2 06:53:35 np0005604943 systemd[1]: libpod-efcc069508bc99eeb02b7576b9d49fd4077981e276599ad2c58c13b8d8153783.scope: Deactivated successfully.
Feb  2 06:53:35 np0005604943 podman[243478]: 2026-02-02 11:53:35.13060892 +0000 UTC m=+0.113337888 container died efcc069508bc99eeb02b7576b9d49fd4077981e276599ad2c58c13b8d8153783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_brown, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:53:35 np0005604943 podman[243478]: 2026-02-02 11:53:35.036591131 +0000 UTC m=+0.019320119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:53:35 np0005604943 systemd[1]: var-lib-containers-storage-overlay-e0408218cf6471d206084067716e1663b153d212087e9e358a33394cf31a6438-merged.mount: Deactivated successfully.
Feb  2 06:53:35 np0005604943 podman[243478]: 2026-02-02 11:53:35.162728688 +0000 UTC m=+0.145457656 container remove efcc069508bc99eeb02b7576b9d49fd4077981e276599ad2c58c13b8d8153783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:53:35 np0005604943 systemd[1]: libpod-conmon-efcc069508bc99eeb02b7576b9d49fd4077981e276599ad2c58c13b8d8153783.scope: Deactivated successfully.
Feb  2 06:53:35 np0005604943 podman[243517]: 2026-02-02 11:53:35.309865848 +0000 UTC m=+0.045858683 container create 69afd7a67de396c915323ba85ba62d71a8bcf8399ad879b827e7bf78180a23c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:53:35 np0005604943 systemd[1]: Started libpod-conmon-69afd7a67de396c915323ba85ba62d71a8bcf8399ad879b827e7bf78180a23c2.scope.
Feb  2 06:53:35 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:53:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb58d265d026692755d958996c50fdc3e2383eb9040bb7845cded6ab48412353/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:53:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb58d265d026692755d958996c50fdc3e2383eb9040bb7845cded6ab48412353/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:53:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb58d265d026692755d958996c50fdc3e2383eb9040bb7845cded6ab48412353/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:53:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb58d265d026692755d958996c50fdc3e2383eb9040bb7845cded6ab48412353/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:53:35 np0005604943 podman[243517]: 2026-02-02 11:53:35.368134621 +0000 UTC m=+0.104127486 container init 69afd7a67de396c915323ba85ba62d71a8bcf8399ad879b827e7bf78180a23c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 06:53:35 np0005604943 podman[243517]: 2026-02-02 11:53:35.372803908 +0000 UTC m=+0.108796743 container start 69afd7a67de396c915323ba85ba62d71a8bcf8399ad879b827e7bf78180a23c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:53:35 np0005604943 podman[243517]: 2026-02-02 11:53:35.376255143 +0000 UTC m=+0.112247988 container attach 69afd7a67de396c915323ba85ba62d71a8bcf8399ad879b827e7bf78180a23c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:53:35 np0005604943 podman[243517]: 2026-02-02 11:53:35.293010678 +0000 UTC m=+0.029003543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]: {
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:    "0": [
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:        {
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "devices": [
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "/dev/loop3"
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            ],
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_name": "ceph_lv0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_size": "21470642176",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "name": "ceph_lv0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "tags": {
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.cluster_name": "ceph",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.crush_device_class": "",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.encrypted": "0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.objectstore": "bluestore",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.osd_id": "0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.type": "block",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.vdo": "0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.with_tpm": "0"
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            },
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "type": "block",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "vg_name": "ceph_vg0"
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:        }
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:    ],
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:    "1": [
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:        {
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "devices": [
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "/dev/loop4"
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            ],
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_name": "ceph_lv1",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_size": "21470642176",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "name": "ceph_lv1",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "tags": {
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.cluster_name": "ceph",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.crush_device_class": "",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.encrypted": "0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.objectstore": "bluestore",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.osd_id": "1",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.type": "block",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.vdo": "0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.with_tpm": "0"
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            },
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "type": "block",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "vg_name": "ceph_vg1"
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:        }
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:    ],
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:    "2": [
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:        {
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "devices": [
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "/dev/loop5"
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            ],
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_name": "ceph_lv2",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_size": "21470642176",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "name": "ceph_lv2",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "tags": {
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.cluster_name": "ceph",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.crush_device_class": "",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.encrypted": "0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.objectstore": "bluestore",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.osd_id": "2",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.type": "block",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.vdo": "0",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:                "ceph.with_tpm": "0"
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            },
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "type": "block",
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:            "vg_name": "ceph_vg2"
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:        }
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]:    ]
Feb  2 06:53:35 np0005604943 focused_rhodes[243533]: }
Feb  2 06:53:35 np0005604943 systemd[1]: libpod-69afd7a67de396c915323ba85ba62d71a8bcf8399ad879b827e7bf78180a23c2.scope: Deactivated successfully.
Feb  2 06:53:35 np0005604943 podman[243517]: 2026-02-02 11:53:35.628998259 +0000 UTC m=+0.364991094 container died 69afd7a67de396c915323ba85ba62d71a8bcf8399ad879b827e7bf78180a23c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 06:53:35 np0005604943 systemd[1]: var-lib-containers-storage-overlay-fb58d265d026692755d958996c50fdc3e2383eb9040bb7845cded6ab48412353-merged.mount: Deactivated successfully.
Feb  2 06:53:35 np0005604943 podman[243517]: 2026-02-02 11:53:35.673150126 +0000 UTC m=+0.409142961 container remove 69afd7a67de396c915323ba85ba62d71a8bcf8399ad879b827e7bf78180a23c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 06:53:35 np0005604943 systemd[1]: libpod-conmon-69afd7a67de396c915323ba85ba62d71a8bcf8399ad879b827e7bf78180a23c2.scope: Deactivated successfully.
Feb  2 06:53:36 np0005604943 podman[243617]: 2026-02-02 11:53:36.033620556 +0000 UTC m=+0.033429265 container create c7d4b4ba00ed449c3bf7a5b7b6abc3934bfa44bd875ac802c3e1c8f627ae90d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_gates, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 06:53:36 np0005604943 systemd[1]: Started libpod-conmon-c7d4b4ba00ed449c3bf7a5b7b6abc3934bfa44bd875ac802c3e1c8f627ae90d4.scope.
Feb  2 06:53:36 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:53:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:53:36 np0005604943 podman[243617]: 2026-02-02 11:53:36.09964712 +0000 UTC m=+0.099455839 container init c7d4b4ba00ed449c3bf7a5b7b6abc3934bfa44bd875ac802c3e1c8f627ae90d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_gates, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:53:36 np0005604943 podman[243617]: 2026-02-02 11:53:36.1058749 +0000 UTC m=+0.105683609 container start c7d4b4ba00ed449c3bf7a5b7b6abc3934bfa44bd875ac802c3e1c8f627ae90d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 06:53:36 np0005604943 epic_gates[243634]: 167 167
Feb  2 06:53:36 np0005604943 systemd[1]: libpod-c7d4b4ba00ed449c3bf7a5b7b6abc3934bfa44bd875ac802c3e1c8f627ae90d4.scope: Deactivated successfully.
Feb  2 06:53:36 np0005604943 podman[243617]: 2026-02-02 11:53:36.110320362 +0000 UTC m=+0.110129091 container attach c7d4b4ba00ed449c3bf7a5b7b6abc3934bfa44bd875ac802c3e1c8f627ae90d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_gates, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 06:53:36 np0005604943 podman[243617]: 2026-02-02 11:53:36.110672851 +0000 UTC m=+0.110481580 container died c7d4b4ba00ed449c3bf7a5b7b6abc3934bfa44bd875ac802c3e1c8f627ae90d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_gates, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 06:53:36 np0005604943 podman[243617]: 2026-02-02 11:53:36.018678138 +0000 UTC m=+0.018486877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:53:36 np0005604943 systemd[1]: var-lib-containers-storage-overlay-67e9e8fee4a4cde71787b80fbcd8c0946c0d50b971257b3f09d164c7ebdbfdba-merged.mount: Deactivated successfully.
Feb  2 06:53:36 np0005604943 podman[243617]: 2026-02-02 11:53:36.138755948 +0000 UTC m=+0.138564657 container remove c7d4b4ba00ed449c3bf7a5b7b6abc3934bfa44bd875ac802c3e1c8f627ae90d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 06:53:36 np0005604943 systemd[1]: libpod-conmon-c7d4b4ba00ed449c3bf7a5b7b6abc3934bfa44bd875ac802c3e1c8f627ae90d4.scope: Deactivated successfully.
Feb  2 06:53:36 np0005604943 podman[243659]: 2026-02-02 11:53:36.244056657 +0000 UTC m=+0.032742437 container create e409d437ce4e9b58dc4c3e525d0b3fe6467bbc44296c73ad68408f15252b0f38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_driscoll, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 06:53:36 np0005604943 systemd[1]: Started libpod-conmon-e409d437ce4e9b58dc4c3e525d0b3fe6467bbc44296c73ad68408f15252b0f38.scope.
Feb  2 06:53:36 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:53:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f96880bcf12e9e91f96ada74562ec1cbfedd493500d38eb252790efade0430b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:53:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f96880bcf12e9e91f96ada74562ec1cbfedd493500d38eb252790efade0430b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:53:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f96880bcf12e9e91f96ada74562ec1cbfedd493500d38eb252790efade0430b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:53:36 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f96880bcf12e9e91f96ada74562ec1cbfedd493500d38eb252790efade0430b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:53:36 np0005604943 podman[243659]: 2026-02-02 11:53:36.304329474 +0000 UTC m=+0.093015264 container init e409d437ce4e9b58dc4c3e525d0b3fe6467bbc44296c73ad68408f15252b0f38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_driscoll, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:53:36 np0005604943 podman[243659]: 2026-02-02 11:53:36.309880615 +0000 UTC m=+0.098566395 container start e409d437ce4e9b58dc4c3e525d0b3fe6467bbc44296c73ad68408f15252b0f38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:53:36 np0005604943 podman[243659]: 2026-02-02 11:53:36.3173966 +0000 UTC m=+0.106082380 container attach e409d437ce4e9b58dc4c3e525d0b3fe6467bbc44296c73ad68408f15252b0f38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_driscoll, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:53:36 np0005604943 podman[243659]: 2026-02-02 11:53:36.227948716 +0000 UTC m=+0.016634506 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:53:36 np0005604943 lvm[243751]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:53:36 np0005604943 lvm[243751]: VG ceph_vg0 finished
Feb  2 06:53:36 np0005604943 lvm[243754]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:53:36 np0005604943 lvm[243754]: VG ceph_vg1 finished
Feb  2 06:53:36 np0005604943 lvm[243756]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:53:36 np0005604943 lvm[243756]: VG ceph_vg2 finished
Feb  2 06:53:37 np0005604943 thirsty_driscoll[243675]: {}
Feb  2 06:53:37 np0005604943 systemd[1]: libpod-e409d437ce4e9b58dc4c3e525d0b3fe6467bbc44296c73ad68408f15252b0f38.scope: Deactivated successfully.
Feb  2 06:53:37 np0005604943 podman[243659]: 2026-02-02 11:53:37.058026079 +0000 UTC m=+0.846711859 container died e409d437ce4e9b58dc4c3e525d0b3fe6467bbc44296c73ad68408f15252b0f38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_driscoll, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:53:37 np0005604943 systemd[1]: var-lib-containers-storage-overlay-8f96880bcf12e9e91f96ada74562ec1cbfedd493500d38eb252790efade0430b-merged.mount: Deactivated successfully.
Feb  2 06:53:37 np0005604943 podman[243659]: 2026-02-02 11:53:37.120484475 +0000 UTC m=+0.909170255 container remove e409d437ce4e9b58dc4c3e525d0b3fe6467bbc44296c73ad68408f15252b0f38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_driscoll, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:53:37 np0005604943 systemd[1]: libpod-conmon-e409d437ce4e9b58dc4c3e525d0b3fe6467bbc44296c73ad68408f15252b0f38.scope: Deactivated successfully.
Feb  2 06:53:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:53:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:53:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:53:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:53:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:53:37 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:53:37 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:53:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:53:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:53:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Feb  2 06:53:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Feb  2 06:53:40 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Feb  2 06:53:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:53:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:53:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:53:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:53:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:53:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:53:40 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:53:40.833 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:53:40 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:53:40.835 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 06:53:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Feb  2 06:53:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Feb  2 06:53:41 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Feb  2 06:53:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Feb  2 06:53:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 06:53:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Feb  2 06:53:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Feb  2 06:53:43 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Feb  2 06:53:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 1.8 KiB/s wr, 9 op/s
Feb  2 06:53:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:53:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2762542846' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:53:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:53:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2762542846' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:53:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:53:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1246762878' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:53:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:53:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1246762878' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:53:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Feb  2 06:53:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Feb  2 06:53:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Feb  2 06:53:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 2.0 KiB/s wr, 10 op/s
Feb  2 06:53:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:53:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3804365349' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:53:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:53:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3804365349' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:53:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:53:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Feb  2 06:53:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Feb  2 06:53:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Feb  2 06:53:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:53:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2706518670' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:53:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:53:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2706518670' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:53:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 4.7 KiB/s wr, 82 op/s
Feb  2 06:53:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Feb  2 06:53:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Feb  2 06:53:48 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Feb  2 06:53:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:53:49.837 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:53:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 6.5 KiB/s wr, 141 op/s
Feb  2 06:53:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:53:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2414505977' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:53:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:53:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2414505977' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:53:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 6.2 KiB/s wr, 134 op/s
Feb  2 06:53:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:53:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Feb  2 06:53:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Feb  2 06:53:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Feb  2 06:53:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Feb  2 06:53:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Feb  2 06:53:53 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Feb  2 06:53:54 np0005604943 podman[243801]: 2026-02-02 11:53:54.029801737 +0000 UTC m=+0.046980985 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb  2 06:53:54 np0005604943 podman[243800]: 2026-02-02 11:53:54.079958977 +0000 UTC m=+0.095253433 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb  2 06:53:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 7.0 KiB/s wr, 144 op/s
Feb  2 06:53:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:53:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1781381286' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:53:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:53:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1781381286' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:53:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 5.4 KiB/s wr, 112 op/s
Feb  2 06:53:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:53:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Feb  2 06:53:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Feb  2 06:53:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Feb  2 06:53:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 5.8 KiB/s wr, 133 op/s
Feb  2 06:53:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Feb  2 06:53:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Feb  2 06:53:58 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Feb  2 06:54:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.9 KiB/s wr, 59 op/s
Feb  2 06:54:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Feb  2 06:54:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Feb  2 06:54:00 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Feb  2 06:54:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.7 KiB/s wr, 62 op/s
Feb  2 06:54:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:54:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Feb  2 06:54:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Feb  2 06:54:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Feb  2 06:54:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Feb  2 06:54:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Feb  2 06:54:03 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Feb  2 06:54:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 5.3 KiB/s wr, 85 op/s
Feb  2 06:54:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Feb  2 06:54:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Feb  2 06:54:04 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Feb  2 06:54:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2335652738' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2335652738' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Feb  2 06:54:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Feb  2 06:54:05 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Feb  2 06:54:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3148311038' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3148311038' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 5.7 KiB/s wr, 111 op/s
Feb  2 06:54:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:54:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Feb  2 06:54:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Feb  2 06:54:07 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Feb  2 06:54:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4967806' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4967806' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 7.8 KiB/s wr, 187 op/s
Feb  2 06:54:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Feb  2 06:54:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Feb  2 06:54:08 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Feb  2 06:54:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:54:09
Feb  2 06:54:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:54:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:54:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.control', '.rgw.root']
Feb  2 06:54:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:54:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:54:10.019 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:54:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:54:10.020 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:54:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:54:10.020 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 8.7 KiB/s wr, 191 op/s
Feb  2 06:54:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Feb  2 06:54:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Feb  2 06:54:10 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:54:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:54:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1623356767' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1623356767' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Feb  2 06:54:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Feb  2 06:54:11 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Feb  2 06:54:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 8.0 KiB/s wr, 151 op/s
Feb  2 06:54:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:54:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Feb  2 06:54:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Feb  2 06:54:12 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Feb  2 06:54:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3955783476' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3955783476' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Feb  2 06:54:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Feb  2 06:54:13 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Feb  2 06:54:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 8.7 KiB/s wr, 203 op/s
Feb  2 06:54:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/636803982' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/636803982' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 6.4 KiB/s wr, 148 op/s
Feb  2 06:54:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:54:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Feb  2 06:54:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Feb  2 06:54:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Feb  2 06:54:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 6.8 KiB/s wr, 196 op/s
Feb  2 06:54:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 5.4 KiB/s wr, 149 op/s
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.6806980606757733e-06 of space, bias 1.0, pg target 0.000504209418202732 quantized to 32 (current 32)
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665949713231344 of space, bias 1.0, pg target 0.1997849139694032 quantized to 32 (current 32)
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8177534241132022e-06 of space, bias 4.0, pg target 0.0021813041089358428 quantized to 16 (current 16)
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:54:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:54:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.3 KiB/s wr, 72 op/s
Feb  2 06:54:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:54:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Feb  2 06:54:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Feb  2 06:54:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Feb  2 06:54:23 np0005604943 nova_compute[238883]: 2026-02-02 11:54:23.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:54:23 np0005604943 nova_compute[238883]: 2026-02-02 11:54:23.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 06:54:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.1 KiB/s wr, 49 op/s
Feb  2 06:54:24 np0005604943 nova_compute[238883]: 2026-02-02 11:54:24.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:54:24 np0005604943 nova_compute[238883]: 2026-02-02 11:54:24.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 06:54:24 np0005604943 nova_compute[238883]: 2026-02-02 11:54:24.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 06:54:24 np0005604943 nova_compute[238883]: 2026-02-02 11:54:24.661 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 06:54:24 np0005604943 nova_compute[238883]: 2026-02-02 11:54:24.661 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:54:24 np0005604943 nova_compute[238883]: 2026-02-02 11:54:24.689 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:54:24 np0005604943 nova_compute[238883]: 2026-02-02 11:54:24.690 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:54:24 np0005604943 nova_compute[238883]: 2026-02-02 11:54:24.690 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:54:24 np0005604943 nova_compute[238883]: 2026-02-02 11:54:24.690 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 06:54:24 np0005604943 nova_compute[238883]: 2026-02-02 11:54:24.690 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:54:25 np0005604943 podman[243869]: 2026-02-02 11:54:25.038247086 +0000 UTC m=+0.049826673 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb  2 06:54:25 np0005604943 podman[243868]: 2026-02-02 11:54:25.0658199 +0000 UTC m=+0.081753536 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller)
Feb  2 06:54:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:54:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3014089848' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:54:25 np0005604943 nova_compute[238883]: 2026-02-02 11:54:25.260 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:54:25 np0005604943 nova_compute[238883]: 2026-02-02 11:54:25.416 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:54:25 np0005604943 nova_compute[238883]: 2026-02-02 11:54:25.418 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5151MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 06:54:25 np0005604943 nova_compute[238883]: 2026-02-02 11:54:25.419 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:54:25 np0005604943 nova_compute[238883]: 2026-02-02 11:54:25.419 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:54:25 np0005604943 nova_compute[238883]: 2026-02-02 11:54:25.492 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 06:54:25 np0005604943 nova_compute[238883]: 2026-02-02 11:54:25.492 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 06:54:25 np0005604943 nova_compute[238883]: 2026-02-02 11:54:25.507 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:54:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:54:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4208797361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:54:26 np0005604943 nova_compute[238883]: 2026-02-02 11:54:26.012 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:54:26 np0005604943 nova_compute[238883]: 2026-02-02 11:54:26.018 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:54:26 np0005604943 nova_compute[238883]: 2026-02-02 11:54:26.044 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:54:26 np0005604943 nova_compute[238883]: 2026-02-02 11:54:26.046 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 06:54:26 np0005604943 nova_compute[238883]: 2026-02-02 11:54:26.047 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:54:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.0 KiB/s wr, 44 op/s
Feb  2 06:54:27 np0005604943 nova_compute[238883]: 2026-02-02 11:54:27.027 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:54:27 np0005604943 nova_compute[238883]: 2026-02-02 11:54:27.028 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:54:27 np0005604943 nova_compute[238883]: 2026-02-02 11:54:27.028 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:54:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:54:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Feb  2 06:54:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Feb  2 06:54:27 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Feb  2 06:54:27 np0005604943 nova_compute[238883]: 2026-02-02 11:54:27.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:54:27 np0005604943 nova_compute[238883]: 2026-02-02 11:54:27.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:54:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3592150944' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3592150944' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 127 B/s wr, 0 op/s
Feb  2 06:54:28 np0005604943 nova_compute[238883]: 2026-02-02 11:54:28.636 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:54:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 895 B/s wr, 20 op/s
Feb  2 06:54:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3774602870' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3774602870' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 737 B/s wr, 16 op/s
Feb  2 06:54:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:54:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 716 B/s wr, 17 op/s
Feb  2 06:54:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 716 B/s wr, 17 op/s
Feb  2 06:54:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2702429780' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2702429780' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2289941170' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2289941170' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:54:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1888390618' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1888390618' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:37 np0005604943 podman[244029]: 2026-02-02 11:54:37.810066836 +0000 UTC m=+0.098339688 container exec fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:54:37 np0005604943 podman[244029]: 2026-02-02 11:54:37.899734866 +0000 UTC m=+0.188007708 container exec_died fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Feb  2 06:54:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Feb  2 06:54:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:54:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:54:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:54:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:54:39 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:54:39 np0005604943 podman[244357]: 2026-02-02 11:54:39.644040151 +0000 UTC m=+0.045962197 container create fd65753b69e40abf6c8b581669db24629765dba1d22a53690b1a8b578a0b50b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_poincare, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 06:54:39 np0005604943 systemd[1]: Started libpod-conmon-fd65753b69e40abf6c8b581669db24629765dba1d22a53690b1a8b578a0b50b3.scope.
Feb  2 06:54:39 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:54:39 np0005604943 podman[244357]: 2026-02-02 11:54:39.622652116 +0000 UTC m=+0.024574182 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:54:39 np0005604943 podman[244357]: 2026-02-02 11:54:39.729681161 +0000 UTC m=+0.131603227 container init fd65753b69e40abf6c8b581669db24629765dba1d22a53690b1a8b578a0b50b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 06:54:39 np0005604943 podman[244357]: 2026-02-02 11:54:39.738614705 +0000 UTC m=+0.140536741 container start fd65753b69e40abf6c8b581669db24629765dba1d22a53690b1a8b578a0b50b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_poincare, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 06:54:39 np0005604943 podman[244357]: 2026-02-02 11:54:39.74244146 +0000 UTC m=+0.144363526 container attach fd65753b69e40abf6c8b581669db24629765dba1d22a53690b1a8b578a0b50b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_poincare, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 06:54:39 np0005604943 romantic_poincare[244373]: 167 167
Feb  2 06:54:39 np0005604943 systemd[1]: libpod-fd65753b69e40abf6c8b581669db24629765dba1d22a53690b1a8b578a0b50b3.scope: Deactivated successfully.
Feb  2 06:54:39 np0005604943 podman[244357]: 2026-02-02 11:54:39.745608556 +0000 UTC m=+0.147530602 container died fd65753b69e40abf6c8b581669db24629765dba1d22a53690b1a8b578a0b50b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_poincare, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:54:39 np0005604943 systemd[1]: var-lib-containers-storage-overlay-600bd0f17e5040fbf3a6cfc59e6874b091c774fb7be6c30bb4ec147c7d8bbd94-merged.mount: Deactivated successfully.
Feb  2 06:54:39 np0005604943 podman[244357]: 2026-02-02 11:54:39.791178512 +0000 UTC m=+0.193100558 container remove fd65753b69e40abf6c8b581669db24629765dba1d22a53690b1a8b578a0b50b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_poincare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 06:54:39 np0005604943 systemd[1]: libpod-conmon-fd65753b69e40abf6c8b581669db24629765dba1d22a53690b1a8b578a0b50b3.scope: Deactivated successfully.
Feb  2 06:54:39 np0005604943 podman[244396]: 2026-02-02 11:54:39.941229472 +0000 UTC m=+0.043334726 container create 0abd6fe5d37967ef6bb56d9811fb2223375e7089e5d0ce3444168022d7de452e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ptolemy, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 06:54:39 np0005604943 systemd[1]: Started libpod-conmon-0abd6fe5d37967ef6bb56d9811fb2223375e7089e5d0ce3444168022d7de452e.scope.
Feb  2 06:54:40 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:54:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd555a2ddd604443cf33ea8dd00599e3966762e4efb5ad2e039924d3d83da91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:54:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd555a2ddd604443cf33ea8dd00599e3966762e4efb5ad2e039924d3d83da91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:54:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd555a2ddd604443cf33ea8dd00599e3966762e4efb5ad2e039924d3d83da91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:54:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd555a2ddd604443cf33ea8dd00599e3966762e4efb5ad2e039924d3d83da91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:54:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd555a2ddd604443cf33ea8dd00599e3966762e4efb5ad2e039924d3d83da91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:54:40 np0005604943 podman[244396]: 2026-02-02 11:54:39.92249062 +0000 UTC m=+0.024595904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:54:40 np0005604943 podman[244396]: 2026-02-02 11:54:40.020830097 +0000 UTC m=+0.122935371 container init 0abd6fe5d37967ef6bb56d9811fb2223375e7089e5d0ce3444168022d7de452e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 06:54:40 np0005604943 podman[244396]: 2026-02-02 11:54:40.031068426 +0000 UTC m=+0.133173680 container start 0abd6fe5d37967ef6bb56d9811fb2223375e7089e5d0ce3444168022d7de452e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 06:54:40 np0005604943 podman[244396]: 2026-02-02 11:54:40.034596153 +0000 UTC m=+0.136701427 container attach 0abd6fe5d37967ef6bb56d9811fb2223375e7089e5d0ce3444168022d7de452e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:54:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Feb  2 06:54:40 np0005604943 nervous_ptolemy[244412]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:54:40 np0005604943 nervous_ptolemy[244412]: --> All data devices are unavailable
Feb  2 06:54:40 np0005604943 systemd[1]: libpod-0abd6fe5d37967ef6bb56d9811fb2223375e7089e5d0ce3444168022d7de452e.scope: Deactivated successfully.
Feb  2 06:54:40 np0005604943 podman[244396]: 2026-02-02 11:54:40.489129233 +0000 UTC m=+0.591234507 container died 0abd6fe5d37967ef6bb56d9811fb2223375e7089e5d0ce3444168022d7de452e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ptolemy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 06:54:40 np0005604943 systemd[1]: var-lib-containers-storage-overlay-7fd555a2ddd604443cf33ea8dd00599e3966762e4efb5ad2e039924d3d83da91-merged.mount: Deactivated successfully.
Feb  2 06:54:40 np0005604943 podman[244396]: 2026-02-02 11:54:40.533093675 +0000 UTC m=+0.635198929 container remove 0abd6fe5d37967ef6bb56d9811fb2223375e7089e5d0ce3444168022d7de452e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ptolemy, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:54:40 np0005604943 systemd[1]: libpod-conmon-0abd6fe5d37967ef6bb56d9811fb2223375e7089e5d0ce3444168022d7de452e.scope: Deactivated successfully.
Feb  2 06:54:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:54:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:54:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:54:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:54:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:54:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:54:40 np0005604943 podman[244504]: 2026-02-02 11:54:40.964265617 +0000 UTC m=+0.037352322 container create b7844715d5e0dd09a41f59353a64ef7d2cebcf9b4c1b78b66482a55998564560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 06:54:40 np0005604943 systemd[1]: Started libpod-conmon-b7844715d5e0dd09a41f59353a64ef7d2cebcf9b4c1b78b66482a55998564560.scope.
Feb  2 06:54:41 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:54:41 np0005604943 podman[244504]: 2026-02-02 11:54:41.019023943 +0000 UTC m=+0.092110658 container init b7844715d5e0dd09a41f59353a64ef7d2cebcf9b4c1b78b66482a55998564560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_robinson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 06:54:41 np0005604943 podman[244504]: 2026-02-02 11:54:41.024306237 +0000 UTC m=+0.097392942 container start b7844715d5e0dd09a41f59353a64ef7d2cebcf9b4c1b78b66482a55998564560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_robinson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 06:54:41 np0005604943 adoring_robinson[244521]: 167 167
Feb  2 06:54:41 np0005604943 systemd[1]: libpod-b7844715d5e0dd09a41f59353a64ef7d2cebcf9b4c1b78b66482a55998564560.scope: Deactivated successfully.
Feb  2 06:54:41 np0005604943 podman[244504]: 2026-02-02 11:54:41.028236074 +0000 UTC m=+0.101322789 container attach b7844715d5e0dd09a41f59353a64ef7d2cebcf9b4c1b78b66482a55998564560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_robinson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb  2 06:54:41 np0005604943 podman[244504]: 2026-02-02 11:54:41.028752209 +0000 UTC m=+0.101838904 container died b7844715d5e0dd09a41f59353a64ef7d2cebcf9b4c1b78b66482a55998564560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:54:41 np0005604943 podman[244504]: 2026-02-02 11:54:40.947871879 +0000 UTC m=+0.020958594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:54:41 np0005604943 systemd[1]: var-lib-containers-storage-overlay-6b5e53ce7429d9d6a6aee36c0ccd3d52e8677d522001932db61fd18d825da750-merged.mount: Deactivated successfully.
Feb  2 06:54:41 np0005604943 podman[244504]: 2026-02-02 11:54:41.061263327 +0000 UTC m=+0.134350022 container remove b7844715d5e0dd09a41f59353a64ef7d2cebcf9b4c1b78b66482a55998564560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 06:54:41 np0005604943 systemd[1]: libpod-conmon-b7844715d5e0dd09a41f59353a64ef7d2cebcf9b4c1b78b66482a55998564560.scope: Deactivated successfully.
Feb  2 06:54:41 np0005604943 podman[244545]: 2026-02-02 11:54:41.205729535 +0000 UTC m=+0.044839176 container create 19b4c5f7b6ba81a5509f9df26a8668f9e83acf43d45f87a693c840a27ce2dc2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:54:41 np0005604943 systemd[1]: Started libpod-conmon-19b4c5f7b6ba81a5509f9df26a8668f9e83acf43d45f87a693c840a27ce2dc2e.scope.
Feb  2 06:54:41 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:54:41 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48cd3d4f0a15f4ceb5078dd422992817850e0fd5c5f4614b53f3206ad0f954d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:54:41 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48cd3d4f0a15f4ceb5078dd422992817850e0fd5c5f4614b53f3206ad0f954d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:54:41 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48cd3d4f0a15f4ceb5078dd422992817850e0fd5c5f4614b53f3206ad0f954d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:54:41 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48cd3d4f0a15f4ceb5078dd422992817850e0fd5c5f4614b53f3206ad0f954d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:54:41 np0005604943 podman[244545]: 2026-02-02 11:54:41.187150557 +0000 UTC m=+0.026260188 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:54:41 np0005604943 podman[244545]: 2026-02-02 11:54:41.28640738 +0000 UTC m=+0.125517031 container init 19b4c5f7b6ba81a5509f9df26a8668f9e83acf43d45f87a693c840a27ce2dc2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dirac, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Feb  2 06:54:41 np0005604943 podman[244545]: 2026-02-02 11:54:41.297861733 +0000 UTC m=+0.136971364 container start 19b4c5f7b6ba81a5509f9df26a8668f9e83acf43d45f87a693c840a27ce2dc2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dirac, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:54:41 np0005604943 podman[244545]: 2026-02-02 11:54:41.313003177 +0000 UTC m=+0.152112828 container attach 19b4c5f7b6ba81a5509f9df26a8668f9e83acf43d45f87a693c840a27ce2dc2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dirac, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:54:41 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:54:41.385 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:54:41 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:54:41.388 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]: {
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:    "0": [
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:        {
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "devices": [
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "/dev/loop3"
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            ],
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_name": "ceph_lv0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_size": "21470642176",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "name": "ceph_lv0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "tags": {
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.cluster_name": "ceph",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.crush_device_class": "",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.encrypted": "0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.objectstore": "bluestore",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.osd_id": "0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.type": "block",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.vdo": "0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.with_tpm": "0"
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            },
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "type": "block",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "vg_name": "ceph_vg0"
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:        }
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:    ],
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:    "1": [
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:        {
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "devices": [
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "/dev/loop4"
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            ],
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_name": "ceph_lv1",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_size": "21470642176",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "name": "ceph_lv1",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "tags": {
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.cluster_name": "ceph",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.crush_device_class": "",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.encrypted": "0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.objectstore": "bluestore",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.osd_id": "1",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.type": "block",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.vdo": "0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.with_tpm": "0"
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            },
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "type": "block",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "vg_name": "ceph_vg1"
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:        }
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:    ],
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:    "2": [
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:        {
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "devices": [
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "/dev/loop5"
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            ],
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_name": "ceph_lv2",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_size": "21470642176",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "name": "ceph_lv2",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "tags": {
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.cluster_name": "ceph",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.crush_device_class": "",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.encrypted": "0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.objectstore": "bluestore",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.osd_id": "2",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.type": "block",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.vdo": "0",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:                "ceph.with_tpm": "0"
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            },
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "type": "block",
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:            "vg_name": "ceph_vg2"
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:        }
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]:    ]
Feb  2 06:54:41 np0005604943 sweet_dirac[244561]: }
Feb  2 06:54:41 np0005604943 systemd[1]: libpod-19b4c5f7b6ba81a5509f9df26a8668f9e83acf43d45f87a693c840a27ce2dc2e.scope: Deactivated successfully.
Feb  2 06:54:41 np0005604943 podman[244545]: 2026-02-02 11:54:41.645484381 +0000 UTC m=+0.484594002 container died 19b4c5f7b6ba81a5509f9df26a8668f9e83acf43d45f87a693c840a27ce2dc2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dirac, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 06:54:41 np0005604943 systemd[1]: var-lib-containers-storage-overlay-48cd3d4f0a15f4ceb5078dd422992817850e0fd5c5f4614b53f3206ad0f954d0-merged.mount: Deactivated successfully.
Feb  2 06:54:41 np0005604943 podman[244545]: 2026-02-02 11:54:41.69189982 +0000 UTC m=+0.531009461 container remove 19b4c5f7b6ba81a5509f9df26a8668f9e83acf43d45f87a693c840a27ce2dc2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:54:41 np0005604943 systemd[1]: libpod-conmon-19b4c5f7b6ba81a5509f9df26a8668f9e83acf43d45f87a693c840a27ce2dc2e.scope: Deactivated successfully.
Feb  2 06:54:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1526396610' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1526396610' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Feb  2 06:54:42 np0005604943 podman[244643]: 2026-02-02 11:54:42.156756792 +0000 UTC m=+0.044722483 container create 55b952750077476a2bb8f4f77b3ee8e71cac0f718fa8c2f03152aed7c8da5c62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 06:54:42 np0005604943 systemd[1]: Started libpod-conmon-55b952750077476a2bb8f4f77b3ee8e71cac0f718fa8c2f03152aed7c8da5c62.scope.
Feb  2 06:54:42 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:54:42 np0005604943 podman[244643]: 2026-02-02 11:54:42.137580249 +0000 UTC m=+0.025545970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:54:42 np0005604943 podman[244643]: 2026-02-02 11:54:42.235031091 +0000 UTC m=+0.122996802 container init 55b952750077476a2bb8f4f77b3ee8e71cac0f718fa8c2f03152aed7c8da5c62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:54:42 np0005604943 podman[244643]: 2026-02-02 11:54:42.243196744 +0000 UTC m=+0.131162435 container start 55b952750077476a2bb8f4f77b3ee8e71cac0f718fa8c2f03152aed7c8da5c62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 06:54:42 np0005604943 podman[244643]: 2026-02-02 11:54:42.246936007 +0000 UTC m=+0.134901698 container attach 55b952750077476a2bb8f4f77b3ee8e71cac0f718fa8c2f03152aed7c8da5c62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 06:54:42 np0005604943 systemd[1]: libpod-55b952750077476a2bb8f4f77b3ee8e71cac0f718fa8c2f03152aed7c8da5c62.scope: Deactivated successfully.
Feb  2 06:54:42 np0005604943 funny_goldberg[244659]: 167 167
Feb  2 06:54:42 np0005604943 conmon[244659]: conmon 55b952750077476a2bb8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55b952750077476a2bb8f4f77b3ee8e71cac0f718fa8c2f03152aed7c8da5c62.scope/container/memory.events
Feb  2 06:54:42 np0005604943 podman[244643]: 2026-02-02 11:54:42.249008753 +0000 UTC m=+0.136974454 container died 55b952750077476a2bb8f4f77b3ee8e71cac0f718fa8c2f03152aed7c8da5c62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 06:54:42 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b97dc1d333aed0f442ba8b3c2ceefd50d8452752492b6fd9ec3a8597b97e2beb-merged.mount: Deactivated successfully.
Feb  2 06:54:42 np0005604943 podman[244643]: 2026-02-02 11:54:42.315999974 +0000 UTC m=+0.203965665 container remove 55b952750077476a2bb8f4f77b3ee8e71cac0f718fa8c2f03152aed7c8da5c62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 06:54:42 np0005604943 systemd[1]: libpod-conmon-55b952750077476a2bb8f4f77b3ee8e71cac0f718fa8c2f03152aed7c8da5c62.scope: Deactivated successfully.
Feb  2 06:54:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:54:42 np0005604943 podman[244686]: 2026-02-02 11:54:42.464119121 +0000 UTC m=+0.045137974 container create 8b664679795b74b7142200ce311d69f07ed1593631d10a9dcbe05a47cc3db225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_faraday, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:54:42 np0005604943 systemd[1]: Started libpod-conmon-8b664679795b74b7142200ce311d69f07ed1593631d10a9dcbe05a47cc3db225.scope.
Feb  2 06:54:42 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:54:42 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045fa7f87a92ef547e291438f73e64ab88f75221071412dd99a8d6624ce33e5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:54:42 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045fa7f87a92ef547e291438f73e64ab88f75221071412dd99a8d6624ce33e5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:54:42 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045fa7f87a92ef547e291438f73e64ab88f75221071412dd99a8d6624ce33e5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:54:42 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045fa7f87a92ef547e291438f73e64ab88f75221071412dd99a8d6624ce33e5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:54:42 np0005604943 podman[244686]: 2026-02-02 11:54:42.444134416 +0000 UTC m=+0.025153289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:54:42 np0005604943 podman[244686]: 2026-02-02 11:54:42.55813761 +0000 UTC m=+0.139156483 container init 8b664679795b74b7142200ce311d69f07ed1593631d10a9dcbe05a47cc3db225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_faraday, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:54:42 np0005604943 podman[244686]: 2026-02-02 11:54:42.563916148 +0000 UTC m=+0.144935001 container start 8b664679795b74b7142200ce311d69f07ed1593631d10a9dcbe05a47cc3db225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_faraday, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:54:42 np0005604943 podman[244686]: 2026-02-02 11:54:42.567968569 +0000 UTC m=+0.148987432 container attach 8b664679795b74b7142200ce311d69f07ed1593631d10a9dcbe05a47cc3db225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_faraday, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:54:43 np0005604943 lvm[244779]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:54:43 np0005604943 lvm[244781]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:54:43 np0005604943 lvm[244779]: VG ceph_vg0 finished
Feb  2 06:54:43 np0005604943 lvm[244781]: VG ceph_vg1 finished
Feb  2 06:54:43 np0005604943 lvm[244783]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:54:43 np0005604943 lvm[244783]: VG ceph_vg2 finished
Feb  2 06:54:43 np0005604943 peaceful_faraday[244702]: {}
Feb  2 06:54:43 np0005604943 systemd[1]: libpod-8b664679795b74b7142200ce311d69f07ed1593631d10a9dcbe05a47cc3db225.scope: Deactivated successfully.
Feb  2 06:54:43 np0005604943 systemd[1]: libpod-8b664679795b74b7142200ce311d69f07ed1593631d10a9dcbe05a47cc3db225.scope: Consumed 1.298s CPU time.
Feb  2 06:54:43 np0005604943 podman[244686]: 2026-02-02 11:54:43.42654894 +0000 UTC m=+1.007567803 container died 8b664679795b74b7142200ce311d69f07ed1593631d10a9dcbe05a47cc3db225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 06:54:43 np0005604943 systemd[1]: var-lib-containers-storage-overlay-045fa7f87a92ef547e291438f73e64ab88f75221071412dd99a8d6624ce33e5e-merged.mount: Deactivated successfully.
Feb  2 06:54:43 np0005604943 podman[244686]: 2026-02-02 11:54:43.481295606 +0000 UTC m=+1.062314459 container remove 8b664679795b74b7142200ce311d69f07ed1593631d10a9dcbe05a47cc3db225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:54:43 np0005604943 systemd[1]: libpod-conmon-8b664679795b74b7142200ce311d69f07ed1593631d10a9dcbe05a47cc3db225.scope: Deactivated successfully.
Feb  2 06:54:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:54:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:54:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:54:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:54:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:54:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:54:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.8 KiB/s wr, 44 op/s
Feb  2 06:54:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3077486678' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3077486678' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2900463907' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2900463907' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 KiB/s wr, 43 op/s
Feb  2 06:54:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:54:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.1 KiB/s wr, 56 op/s
Feb  2 06:54:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.9 KiB/s wr, 42 op/s
Feb  2 06:54:50 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:54:50.390 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:54:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb  2 06:54:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:54:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Feb  2 06:54:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Feb  2 06:54:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Feb  2 06:54:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1023 B/s wr, 21 op/s
Feb  2 06:54:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Feb  2 06:54:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Feb  2 06:54:54 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Feb  2 06:54:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:54:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2483424172' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:54:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:54:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2483424172' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:54:56 np0005604943 podman[244826]: 2026-02-02 11:54:56.024596464 +0000 UTC m=+0.048240710 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 06:54:56 np0005604943 podman[244825]: 2026-02-02 11:54:56.077427198 +0000 UTC m=+0.101283940 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller)
Feb  2 06:54:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 895 B/s wr, 7 op/s
Feb  2 06:54:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:54:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 43 op/s
Feb  2 06:55:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Feb  2 06:55:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Feb  2 06:55:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:55:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Feb  2 06:55:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Feb  2 06:55:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Feb  2 06:55:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.6 KiB/s wr, 33 op/s
Feb  2 06:55:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3718642012' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3718642012' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 KiB/s wr, 31 op/s
Feb  2 06:55:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:55:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:55:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2149840360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:55:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 18 op/s
Feb  2 06:55:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Feb  2 06:55:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Feb  2 06:55:08 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Feb  2 06:55:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Feb  2 06:55:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Feb  2 06:55:09 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Feb  2 06:55:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:55:09
Feb  2 06:55:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:55:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:55:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log']
Feb  2 06:55:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:55:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:10.020 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:10.021 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:10.021 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.4 KiB/s wr, 27 op/s
Feb  2 06:55:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Feb  2 06:55:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Feb  2 06:55:10 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:55:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:55:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Feb  2 06:55:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Feb  2 06:55:11 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Feb  2 06:55:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 767 B/s wr, 9 op/s
Feb  2 06:55:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:55:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.4 KiB/s wr, 35 op/s
Feb  2 06:55:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:55:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2513412808' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:55:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4241782525' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4241782525' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:15 np0005604943 nova_compute[238883]: 2026-02-02 11:55:15.578 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Acquiring lock "7a09bf38-57d0-4d6f-a224-43442657d36e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:15 np0005604943 nova_compute[238883]: 2026-02-02 11:55:15.579 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:15 np0005604943 nova_compute[238883]: 2026-02-02 11:55:15.614 238887 DEBUG nova.compute.manager [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 06:55:15 np0005604943 nova_compute[238883]: 2026-02-02 11:55:15.772 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:15 np0005604943 nova_compute[238883]: 2026-02-02 11:55:15.773 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:15 np0005604943 nova_compute[238883]: 2026-02-02 11:55:15.783 238887 DEBUG nova.virt.hardware [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 06:55:15 np0005604943 nova_compute[238883]: 2026-02-02 11:55:15.784 238887 INFO nova.compute.claims [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 06:55:15 np0005604943 nova_compute[238883]: 2026-02-02 11:55:15.894 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:55:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.4 KiB/s wr, 24 op/s
Feb  2 06:55:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3895672800' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3895672800' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:55:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2131695704' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.432 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.439 238887 DEBUG nova.compute.provider_tree [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.456 238887 DEBUG nova.scheduler.client.report [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.482 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.483 238887 DEBUG nova.compute.manager [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.530 238887 DEBUG nova.compute.manager [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.531 238887 DEBUG nova.network.neutron [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.557 238887 INFO nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.575 238887 DEBUG nova.compute.manager [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 06:55:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Feb  2 06:55:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Feb  2 06:55:16 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.653 238887 DEBUG nova.compute.manager [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.655 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.656 238887 INFO nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Creating image(s)#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.678 238887 DEBUG nova.storage.rbd_utils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] rbd image 7a09bf38-57d0-4d6f-a224-43442657d36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.703 238887 DEBUG nova.storage.rbd_utils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] rbd image 7a09bf38-57d0-4d6f-a224-43442657d36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.729 238887 DEBUG nova.storage.rbd_utils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] rbd image 7a09bf38-57d0-4d6f-a224-43442657d36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.733 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:16 np0005604943 nova_compute[238883]: 2026-02-02 11:55:16.735 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:17 np0005604943 nova_compute[238883]: 2026-02-02 11:55:17.105 238887 WARNING oslo_policy.policy [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Feb  2 06:55:17 np0005604943 nova_compute[238883]: 2026-02-02 11:55:17.106 238887 WARNING oslo_policy.policy [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Feb  2 06:55:17 np0005604943 nova_compute[238883]: 2026-02-02 11:55:17.109 238887 DEBUG nova.policy [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5a36971365664536a708363aa77853a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '38eed61cf7d3411bbda8849ccc572a02', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 06:55:17 np0005604943 nova_compute[238883]: 2026-02-02 11:55:17.166 238887 DEBUG nova.virt.libvirt.imagebackend [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Image locations are: [{'url': 'rbd://4548a36b-7cdc-5e3e-a814-4e1571be1fae/images/21b263f0-00f1-47be-b8b1-e3c07da0a6a2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4548a36b-7cdc-5e3e-a814-4e1571be1fae/images/21b263f0-00f1-47be-b8b1-e3c07da0a6a2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Feb  2 06:55:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:55:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Feb  2 06:55:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Feb  2 06:55:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Feb  2 06:55:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.3 KiB/s wr, 37 op/s
Feb  2 06:55:18 np0005604943 nova_compute[238883]: 2026-02-02 11:55:18.735 238887 DEBUG nova.network.neutron [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Successfully created port: f984b39b-d2ed-40c2-b5f7-631f92ebbb0c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 06:55:18 np0005604943 nova_compute[238883]: 2026-02-02 11:55:18.973 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:55:19 np0005604943 nova_compute[238883]: 2026-02-02 11:55:19.028 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8.part --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:55:19 np0005604943 nova_compute[238883]: 2026-02-02 11:55:19.030 238887 DEBUG nova.virt.images [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] 21b263f0-00f1-47be-b8b1-e3c07da0a6a2 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Feb  2 06:55:19 np0005604943 nova_compute[238883]: 2026-02-02 11:55:19.033 238887 DEBUG nova.privsep.utils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Feb  2 06:55:19 np0005604943 nova_compute[238883]: 2026-02-02 11:55:19.034 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8.part /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:55:19 np0005604943 nova_compute[238883]: 2026-02-02 11:55:19.371 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8.part /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8.converted" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:55:19 np0005604943 nova_compute[238883]: 2026-02-02 11:55:19.373 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:55:19 np0005604943 nova_compute[238883]: 2026-02-02 11:55:19.427 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8.converted --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:55:19 np0005604943 nova_compute[238883]: 2026-02-02 11:55:19.428 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:19 np0005604943 nova_compute[238883]: 2026-02-02 11:55:19.448 238887 DEBUG nova.storage.rbd_utils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] rbd image 7a09bf38-57d0-4d6f-a224-43442657d36e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:55:19 np0005604943 nova_compute[238883]: 2026-02-02 11:55:19.452 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 7a09bf38-57d0-4d6f-a224-43442657d36e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:55:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Feb  2 06:55:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Feb  2 06:55:19 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Feb  2 06:55:19 np0005604943 nova_compute[238883]: 2026-02-02 11:55:19.988 238887 DEBUG nova.network.neutron [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Successfully updated port: f984b39b-d2ed-40c2-b5f7-631f92ebbb0c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 06:55:20 np0005604943 nova_compute[238883]: 2026-02-02 11:55:20.008 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Acquiring lock "refresh_cache-7a09bf38-57d0-4d6f-a224-43442657d36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:55:20 np0005604943 nova_compute[238883]: 2026-02-02 11:55:20.008 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Acquired lock "refresh_cache-7a09bf38-57d0-4d6f-a224-43442657d36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:55:20 np0005604943 nova_compute[238883]: 2026-02-02 11:55:20.008 238887 DEBUG nova.network.neutron [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 06:55:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.0 KiB/s wr, 83 op/s
Feb  2 06:55:20 np0005604943 nova_compute[238883]: 2026-02-02 11:55:20.311 238887 DEBUG nova.network.neutron [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 06:55:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1453864175' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1453864175' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:20 np0005604943 nova_compute[238883]: 2026-02-02 11:55:20.515 238887 DEBUG nova.compute.manager [req-8d5deb73-1eb3-4541-ad0a-2fc8f24a211e req-8ebc2391-5750-4b43-be82-e01ab681d7c6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Received event network-changed-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:55:20 np0005604943 nova_compute[238883]: 2026-02-02 11:55:20.516 238887 DEBUG nova.compute.manager [req-8d5deb73-1eb3-4541-ad0a-2fc8f24a211e req-8ebc2391-5750-4b43-be82-e01ab681d7c6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Refreshing instance network info cache due to event network-changed-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:55:20 np0005604943 nova_compute[238883]: 2026-02-02 11:55:20.516 238887 DEBUG oslo_concurrency.lockutils [req-8d5deb73-1eb3-4541-ad0a-2fc8f24a211e req-8ebc2391-5750-4b43-be82-e01ab681d7c6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-7a09bf38-57d0-4d6f-a224-43442657d36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:55:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Feb  2 06:55:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Feb  2 06:55:20 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.050 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 7a09bf38-57d0-4d6f-a224-43442657d36e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.126 238887 DEBUG nova.storage.rbd_utils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] resizing rbd image 7a09bf38-57d0-4d6f-a224-43442657d36e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.227 238887 DEBUG nova.objects.instance [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lazy-loading 'migration_context' on Instance uuid 7a09bf38-57d0-4d6f-a224-43442657d36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.246 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.246 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Ensure instance console log exists: /var/lib/nova/instances/7a09bf38-57d0-4d6f-a224-43442657d36e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.247 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.247 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.247 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.705757813286935e-06 of space, bias 1.0, pg target 0.0011117273439860804 quantized to 32 (current 32)
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.275790856966184e-07 of space, bias 1.0, pg target 9.827372570898553e-05 quantized to 32 (current 32)
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659709515341513 of space, bias 1.0, pg target 0.19979128546024538 quantized to 32 (current 32)
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.798843882578677e-06 of space, bias 4.0, pg target 0.0021586126590944126 quantized to 16 (current 16)
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:55:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.761 238887 DEBUG nova.network.neutron [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Updating instance_info_cache with network_info: [{"id": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "address": "fa:16:3e:a3:02:65", "network": {"id": "f00d2f52-f227-4a46-8fbf-09609a953903", "bridge": "br-int", "label": "tempest-VolumesActionsTest-8380463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38eed61cf7d3411bbda8849ccc572a02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf984b39b-d2", "ovs_interfaceid": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.787 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Releasing lock "refresh_cache-7a09bf38-57d0-4d6f-a224-43442657d36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.788 238887 DEBUG nova.compute.manager [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Instance network_info: |[{"id": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "address": "fa:16:3e:a3:02:65", "network": {"id": "f00d2f52-f227-4a46-8fbf-09609a953903", "bridge": "br-int", "label": "tempest-VolumesActionsTest-8380463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38eed61cf7d3411bbda8849ccc572a02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf984b39b-d2", "ovs_interfaceid": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.788 238887 DEBUG oslo_concurrency.lockutils [req-8d5deb73-1eb3-4541-ad0a-2fc8f24a211e req-8ebc2391-5750-4b43-be82-e01ab681d7c6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-7a09bf38-57d0-4d6f-a224-43442657d36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.788 238887 DEBUG nova.network.neutron [req-8d5deb73-1eb3-4541-ad0a-2fc8f24a211e req-8ebc2391-5750-4b43-be82-e01ab681d7c6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Refreshing network info cache for port f984b39b-d2ed-40c2-b5f7-631f92ebbb0c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:55:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.792 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Start _get_guest_xml network_info=[{"id": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "address": "fa:16:3e:a3:02:65", "network": {"id": "f00d2f52-f227-4a46-8fbf-09609a953903", "bridge": "br-int", "label": "tempest-VolumesActionsTest-8380463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38eed61cf7d3411bbda8849ccc572a02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf984b39b-d2", "ovs_interfaceid": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.797 238887 WARNING nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:55:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.803 238887 DEBUG nova.virt.libvirt.host [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.804 238887 DEBUG nova.virt.libvirt.host [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.806 238887 DEBUG nova.virt.libvirt.host [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.807 238887 DEBUG nova.virt.libvirt.host [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.808 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 06:55:21 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.808 238887 DEBUG nova.virt.hardware [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.808 238887 DEBUG nova.virt.hardware [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.809 238887 DEBUG nova.virt.hardware [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.809 238887 DEBUG nova.virt.hardware [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.809 238887 DEBUG nova.virt.hardware [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.809 238887 DEBUG nova.virt.hardware [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.809 238887 DEBUG nova.virt.hardware [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.810 238887 DEBUG nova.virt.hardware [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.810 238887 DEBUG nova.virt.hardware [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.810 238887 DEBUG nova.virt.hardware [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.810 238887 DEBUG nova.virt.hardware [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.813 238887 DEBUG nova.privsep.utils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Feb  2 06:55:21 np0005604943 nova_compute[238883]: 2026-02-02 11:55:21.814 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:55:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.9 KiB/s wr, 96 op/s
Feb  2 06:55:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2859680651' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2859680651' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:55:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4116927140' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.370 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.392 238887 DEBUG nova.storage.rbd_utils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] rbd image 7a09bf38-57d0-4d6f-a224-43442657d36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.397 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:55:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:55:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:55:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577761852' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.909 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.911 238887 DEBUG nova.virt.libvirt.vif [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-970181999',display_name='tempest-VolumesActionsTest-instance-970181999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-970181999',id=1,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='38eed61cf7d3411bbda8849ccc572a02',ramdisk_id='',reservation_id='r-ha3pvaoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1711581404',owner_user_name='tempest-VolumesActionsTest-1711581404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:55:16Z,user_data=None,user_id='5a36971365664536a708363aa77853a1',uuid=7a09bf38-57d0-4d6f-a224-43442657d36e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "address": "fa:16:3e:a3:02:65", "network": {"id": "f00d2f52-f227-4a46-8fbf-09609a953903", "bridge": "br-int", "label": "tempest-VolumesActionsTest-8380463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38eed61cf7d3411bbda8849ccc572a02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf984b39b-d2", "ovs_interfaceid": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.912 238887 DEBUG nova.network.os_vif_util [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Converting VIF {"id": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "address": "fa:16:3e:a3:02:65", "network": {"id": "f00d2f52-f227-4a46-8fbf-09609a953903", "bridge": "br-int", "label": "tempest-VolumesActionsTest-8380463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38eed61cf7d3411bbda8849ccc572a02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf984b39b-d2", "ovs_interfaceid": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.913 238887 DEBUG nova.network.os_vif_util [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:02:65,bridge_name='br-int',has_traffic_filtering=True,id=f984b39b-d2ed-40c2-b5f7-631f92ebbb0c,network=Network(f00d2f52-f227-4a46-8fbf-09609a953903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf984b39b-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.916 238887 DEBUG nova.objects.instance [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a09bf38-57d0-4d6f-a224-43442657d36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.932 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] End _get_guest_xml xml=<domain type="kvm">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  <uuid>7a09bf38-57d0-4d6f-a224-43442657d36e</uuid>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  <name>instance-00000001</name>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <nova:name>tempest-VolumesActionsTest-instance-970181999</nova:name>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 11:55:21</nova:creationTime>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:        <nova:user uuid="5a36971365664536a708363aa77853a1">tempest-VolumesActionsTest-1711581404-project-member</nova:user>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:        <nova:project uuid="38eed61cf7d3411bbda8849ccc572a02">tempest-VolumesActionsTest-1711581404</nova:project>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:        <nova:port uuid="f984b39b-d2ed-40c2-b5f7-631f92ebbb0c">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <system>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <entry name="serial">7a09bf38-57d0-4d6f-a224-43442657d36e</entry>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <entry name="uuid">7a09bf38-57d0-4d6f-a224-43442657d36e</entry>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    </system>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  <os>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  </clock>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/7a09bf38-57d0-4d6f-a224-43442657d36e_disk">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/7a09bf38-57d0-4d6f-a224-43442657d36e_disk.config">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:a3:02:65"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <target dev="tapf984b39b-d2"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/7a09bf38-57d0-4d6f-a224-43442657d36e/console.log" append="off"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    </serial>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <video>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 06:55:22 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 06:55:22 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:55:22 np0005604943 nova_compute[238883]: </domain>
Feb  2 06:55:22 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.934 238887 DEBUG nova.compute.manager [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Preparing to wait for external event network-vif-plugged-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.934 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Acquiring lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.935 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.935 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.936 238887 DEBUG nova.virt.libvirt.vif [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-970181999',display_name='tempest-VolumesActionsTest-instance-970181999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-970181999',id=1,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='38eed61cf7d3411bbda8849ccc572a02',ramdisk_id='',reservation_id='r-ha3pvaoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1711581404',owner_user_name='tempest-VolumesActionsTest-1711581404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:55:16Z,user_data=None,user_id='5a36971365664536a708363aa77853a1',uuid=7a09bf38-57d0-4d6f-a224-43442657d36e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "address": "fa:16:3e:a3:02:65", "network": {"id": "f00d2f52-f227-4a46-8fbf-09609a953903", "bridge": "br-int", "label": "tempest-VolumesActionsTest-8380463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38eed61cf7d3411bbda8849ccc572a02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf984b39b-d2", "ovs_interfaceid": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.937 238887 DEBUG nova.network.os_vif_util [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Converting VIF {"id": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "address": "fa:16:3e:a3:02:65", "network": {"id": "f00d2f52-f227-4a46-8fbf-09609a953903", "bridge": "br-int", "label": "tempest-VolumesActionsTest-8380463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38eed61cf7d3411bbda8849ccc572a02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf984b39b-d2", "ovs_interfaceid": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.938 238887 DEBUG nova.network.os_vif_util [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:02:65,bridge_name='br-int',has_traffic_filtering=True,id=f984b39b-d2ed-40c2-b5f7-631f92ebbb0c,network=Network(f00d2f52-f227-4a46-8fbf-09609a953903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf984b39b-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.939 238887 DEBUG os_vif [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:02:65,bridge_name='br-int',has_traffic_filtering=True,id=f984b39b-d2ed-40c2-b5f7-631f92ebbb0c,network=Network(f00d2f52-f227-4a46-8fbf-09609a953903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf984b39b-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.976 238887 DEBUG ovsdbapp.backend.ovs_idl [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.977 238887 DEBUG ovsdbapp.backend.ovs_idl [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.977 238887 DEBUG ovsdbapp.backend.ovs_idl [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.978 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.978 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.979 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.979 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.981 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.983 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.994 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.994 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.995 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:55:22 np0005604943 nova_compute[238883]: 2026-02-02 11:55:22.996 238887 INFO oslo.privsep.daemon [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp8j6j0ldg/privsep.sock']#033[00m
Feb  2 06:55:23 np0005604943 nova_compute[238883]: 2026-02-02 11:55:23.220 238887 DEBUG nova.network.neutron [req-8d5deb73-1eb3-4541-ad0a-2fc8f24a211e req-8ebc2391-5750-4b43-be82-e01ab681d7c6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Updated VIF entry in instance network info cache for port f984b39b-d2ed-40c2-b5f7-631f92ebbb0c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:55:23 np0005604943 nova_compute[238883]: 2026-02-02 11:55:23.221 238887 DEBUG nova.network.neutron [req-8d5deb73-1eb3-4541-ad0a-2fc8f24a211e req-8ebc2391-5750-4b43-be82-e01ab681d7c6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Updating instance_info_cache with network_info: [{"id": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "address": "fa:16:3e:a3:02:65", "network": {"id": "f00d2f52-f227-4a46-8fbf-09609a953903", "bridge": "br-int", "label": "tempest-VolumesActionsTest-8380463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38eed61cf7d3411bbda8849ccc572a02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf984b39b-d2", "ovs_interfaceid": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:55:23 np0005604943 nova_compute[238883]: 2026-02-02 11:55:23.249 238887 DEBUG oslo_concurrency.lockutils [req-8d5deb73-1eb3-4541-ad0a-2fc8f24a211e req-8ebc2391-5750-4b43-be82-e01ab681d7c6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-7a09bf38-57d0-4d6f-a224-43442657d36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:55:23 np0005604943 nova_compute[238883]: 2026-02-02 11:55:23.700 238887 INFO oslo.privsep.daemon [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Feb  2 06:55:23 np0005604943 nova_compute[238883]: 2026-02-02 11:55:23.544 245133 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb  2 06:55:23 np0005604943 nova_compute[238883]: 2026-02-02 11:55:23.548 245133 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb  2 06:55:23 np0005604943 nova_compute[238883]: 2026-02-02 11:55:23.550 245133 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Feb  2 06:55:23 np0005604943 nova_compute[238883]: 2026-02-02 11:55:23.550 245133 INFO oslo.privsep.daemon [-] privsep daemon running as pid 245133#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.060 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.061 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf984b39b-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.062 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf984b39b-d2, col_values=(('external_ids', {'iface-id': 'f984b39b-d2ed-40c2-b5f7-631f92ebbb0c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:02:65', 'vm-uuid': '7a09bf38-57d0-4d6f-a224-43442657d36e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.064 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:24 np0005604943 NetworkManager[49093]: <info>  [1770033324.0652] manager: (tapf984b39b-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.067 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.072 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.073 238887 INFO os_vif [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:02:65,bridge_name='br-int',has_traffic_filtering=True,id=f984b39b-d2ed-40c2-b5f7-631f92ebbb0c,network=Network(f00d2f52-f227-4a46-8fbf-09609a953903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf984b39b-d2')#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.125 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.126 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.127 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] No VIF found with MAC fa:16:3e:a3:02:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.127 238887 INFO nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Using config drive#033[00m
Feb  2 06:55:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 240 op/s
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.149 238887 DEBUG nova.storage.rbd_utils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] rbd image 7a09bf38-57d0-4d6f-a224-43442657d36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.642 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.766 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.767 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.767 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.767 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 06:55:24 np0005604943 nova_compute[238883]: 2026-02-02 11:55:24.767 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.237 238887 INFO nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Creating config drive at /var/lib/nova/instances/7a09bf38-57d0-4d6f-a224-43442657d36e/disk.config#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.241 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a09bf38-57d0-4d6f-a224-43442657d36e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplfndsyll execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:55:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:55:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919402497' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.333 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.368 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a09bf38-57d0-4d6f-a224-43442657d36e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplfndsyll" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.388 238887 DEBUG nova.storage.rbd_utils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] rbd image 7a09bf38-57d0-4d6f-a224-43442657d36e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.391 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a09bf38-57d0-4d6f-a224-43442657d36e/disk.config 7a09bf38-57d0-4d6f-a224-43442657d36e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:55:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:55:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2632629111' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.452 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.452 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.501 238887 DEBUG oslo_concurrency.processutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a09bf38-57d0-4d6f-a224-43442657d36e/disk.config 7a09bf38-57d0-4d6f-a224-43442657d36e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.502 238887 INFO nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Deleting local config drive /var/lib/nova/instances/7a09bf38-57d0-4d6f-a224-43442657d36e/disk.config because it was imported into RBD.#033[00m
Feb  2 06:55:25 np0005604943 systemd[1]: Starting libvirt secret daemon...
Feb  2 06:55:25 np0005604943 systemd[1]: Started libvirt secret daemon.
Feb  2 06:55:25 np0005604943 kernel: tun: Universal TUN/TAP device driver, 1.6
Feb  2 06:55:25 np0005604943 NetworkManager[49093]: <info>  [1770033325.6085] manager: (tapf984b39b-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Feb  2 06:55:25 np0005604943 kernel: tapf984b39b-d2: entered promiscuous mode
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.612 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:25 np0005604943 ovn_controller[145056]: 2026-02-02T11:55:25Z|00027|binding|INFO|Claiming lport f984b39b-d2ed-40c2-b5f7-631f92ebbb0c for this chassis.
Feb  2 06:55:25 np0005604943 ovn_controller[145056]: 2026-02-02T11:55:25Z|00028|binding|INFO|f984b39b-d2ed-40c2-b5f7-631f92ebbb0c: Claiming fa:16:3e:a3:02:65 10.100.0.8
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.615 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:25 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:25.634 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:02:65 10.100.0.8'], port_security=['fa:16:3e:a3:02:65 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7a09bf38-57d0-4d6f-a224-43442657d36e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00d2f52-f227-4a46-8fbf-09609a953903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38eed61cf7d3411bbda8849ccc572a02', 'neutron:revision_number': '2', 'neutron:security_group_ids': '98d0c131-f06c-4a3a-b5df-19a2fff19b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b95b83c-02ae-4601-8660-d30085871383, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=f984b39b-d2ed-40c2-b5f7-631f92ebbb0c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:55:25 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:25.636 155011 INFO neutron.agent.ovn.metadata.agent [-] Port f984b39b-d2ed-40c2-b5f7-631f92ebbb0c in datapath f00d2f52-f227-4a46-8fbf-09609a953903 bound to our chassis#033[00m
Feb  2 06:55:25 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:25.639 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00d2f52-f227-4a46-8fbf-09609a953903#033[00m
Feb  2 06:55:25 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:25.640 155011 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpk6kanhyo/privsep.sock']#033[00m
Feb  2 06:55:25 np0005604943 systemd-udevd[245253]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.646 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.647 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5019MB free_disk=59.96752426587045GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.648 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.648 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:25 np0005604943 NetworkManager[49093]: <info>  [1770033325.6533] device (tapf984b39b-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 06:55:25 np0005604943 NetworkManager[49093]: <info>  [1770033325.6539] device (tapf984b39b-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 06:55:25 np0005604943 systemd-machined[206973]: New machine qemu-1-instance-00000001.
Feb  2 06:55:25 np0005604943 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.670 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:25 np0005604943 ovn_controller[145056]: 2026-02-02T11:55:25Z|00029|binding|INFO|Setting lport f984b39b-d2ed-40c2-b5f7-631f92ebbb0c ovn-installed in OVS
Feb  2 06:55:25 np0005604943 ovn_controller[145056]: 2026-02-02T11:55:25Z|00030|binding|INFO|Setting lport f984b39b-d2ed-40c2-b5f7-631f92ebbb0c up in Southbound
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.679 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.739 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 7a09bf38-57d0-4d6f-a224-43442657d36e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.740 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.741 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.793 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:55:25 np0005604943 nova_compute[238883]: 2026-02-02 11:55:25.831 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Feb  2 06:55:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Feb  2 06:55:25 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.013 238887 DEBUG nova.compute.manager [req-75e51482-613e-4842-b4e3-f3d2cd3fce9e req-7c79fda3-9559-4592-9592-aa8129eaaf9b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Received event network-vif-plugged-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.014 238887 DEBUG oslo_concurrency.lockutils [req-75e51482-613e-4842-b4e3-f3d2cd3fce9e req-7c79fda3-9559-4592-9592-aa8129eaaf9b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.015 238887 DEBUG oslo_concurrency.lockutils [req-75e51482-613e-4842-b4e3-f3d2cd3fce9e req-7c79fda3-9559-4592-9592-aa8129eaaf9b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.015 238887 DEBUG oslo_concurrency.lockutils [req-75e51482-613e-4842-b4e3-f3d2cd3fce9e req-7c79fda3-9559-4592-9592-aa8129eaaf9b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.015 238887 DEBUG nova.compute.manager [req-75e51482-613e-4842-b4e3-f3d2cd3fce9e req-7c79fda3-9559-4592-9592-aa8129eaaf9b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Processing event network-vif-plugged-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 06:55:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 171 op/s
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.337 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033326.3361392, 7a09bf38-57d0-4d6f-a224-43442657d36e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.338 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] VM Started (Lifecycle Event)#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.340 238887 DEBUG nova.compute.manager [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.343 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.346 238887 INFO nova.virt.libvirt.driver [-] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Instance spawned successfully.#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.346 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.373 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:55:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:26.376 155011 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Feb  2 06:55:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:26.377 155011 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpk6kanhyo/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Feb  2 06:55:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:26.236 245329 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb  2 06:55:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:26.240 245329 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb  2 06:55:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:26.242 245329 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Feb  2 06:55:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:26.242 245329 INFO oslo.privsep.daemon [-] privsep daemon running as pid 245329#033[00m
Feb  2 06:55:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:26.380 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[aece41fe-2a0a-4064-bbb4-889bb8d2c9d9]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.382 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:55:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:55:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/778659370' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.407 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.409 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033326.3362913, 7a09bf38-57d0-4d6f-a224-43442657d36e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.409 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] VM Paused (Lifecycle Event)#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.411 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.418 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.419 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.419 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.419 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.420 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.420 238887 DEBUG nova.virt.libvirt.driver [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.425 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Updating inventory in ProviderTree for provider 30401227-b88f-415d-9c2d-3119bd1baf61 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.431 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.434 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033326.3513312, 7a09bf38-57d0-4d6f-a224-43442657d36e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.435 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] VM Resumed (Lifecycle Event)#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.485 238887 ERROR nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [req-7b24568b-0fbd-44d2-a7dd-910f0f63317f] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 30401227-b88f-415d-9c2d-3119bd1baf61.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-7b24568b-0fbd-44d2-a7dd-910f0f63317f"}]}#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.489 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.494 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.498 238887 INFO nova.compute.manager [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Took 9.84 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.499 238887 DEBUG nova.compute.manager [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.508 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Refreshing inventories for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.512 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.528 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Updating ProviderTree inventory for provider 30401227-b88f-415d-9c2d-3119bd1baf61 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.529 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Updating inventory in ProviderTree for provider 30401227-b88f-415d-9c2d-3119bd1baf61 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.550 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Refreshing aggregate associations for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.562 238887 INFO nova.compute.manager [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Took 10.83 seconds to build instance.#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.584 238887 DEBUG oslo_concurrency.lockutils [None req-b1ea55d6-89f4-43bc-8371-ea93bd36c095 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.585 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Refreshing trait associations for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  2 06:55:26 np0005604943 nova_compute[238883]: 2026-02-02 11:55:26.617 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:55:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Feb  2 06:55:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Feb  2 06:55:26 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Feb  2 06:55:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:26.922 245329 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:26.923 245329 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:26.923 245329 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:27 np0005604943 podman[245362]: 2026-02-02 11:55:27.044001566 +0000 UTC m=+0.054671685 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 06:55:27 np0005604943 podman[245361]: 2026-02-02 11:55:27.069049031 +0000 UTC m=+0.079569766 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb  2 06:55:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:55:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3155036129' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:55:27 np0005604943 nova_compute[238883]: 2026-02-02 11:55:27.183 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:55:27 np0005604943 nova_compute[238883]: 2026-02-02 11:55:27.190 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Updating inventory in ProviderTree for provider 30401227-b88f-415d-9c2d-3119bd1baf61 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 06:55:27 np0005604943 nova_compute[238883]: 2026-02-02 11:55:27.239 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Updated inventory for provider 30401227-b88f-415d-9c2d-3119bd1baf61 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Feb  2 06:55:27 np0005604943 nova_compute[238883]: 2026-02-02 11:55:27.240 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Updating resource provider 30401227-b88f-415d-9c2d-3119bd1baf61 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Feb  2 06:55:27 np0005604943 nova_compute[238883]: 2026-02-02 11:55:27.240 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Updating inventory in ProviderTree for provider 30401227-b88f-415d-9c2d-3119bd1baf61 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 06:55:27 np0005604943 nova_compute[238883]: 2026-02-02 11:55:27.267 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 06:55:27 np0005604943 nova_compute[238883]: 2026-02-02 11:55:27.268 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:55:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Feb  2 06:55:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Feb  2 06:55:27 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Feb  2 06:55:27 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:27.529 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f00c6a-b506-4340-9006-80d296623af2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:27 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:27.530 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf00d2f52-f1 in ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 06:55:27 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:27.532 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf00d2f52-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 06:55:27 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:27.532 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d90e1210-7b6e-436c-acd4-a40f02b9dee9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:27 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:27.535 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[45ca2544-c5be-40ea-a7c9-6c6d708afdb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:27 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:27.553 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[c747c86f-9239-42ec-a84d-bdde063024ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:27 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:27.579 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b1faaf-c49a-4aa6-aa77-6fa9a6e821e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:27 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:27.582 155011 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpuet8pyad/privsep.sock']#033[00m
Feb  2 06:55:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.6 MiB/s wr, 299 op/s
Feb  2 06:55:28 np0005604943 nova_compute[238883]: 2026-02-02 11:55:28.175 238887 DEBUG nova.compute.manager [req-98a1ee51-0995-4224-9952-a54a4e4a085b req-ac6f12b2-e7fb-409e-bb1a-b69a3ea97616 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Received event network-vif-plugged-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:55:28 np0005604943 nova_compute[238883]: 2026-02-02 11:55:28.176 238887 DEBUG oslo_concurrency.lockutils [req-98a1ee51-0995-4224-9952-a54a4e4a085b req-ac6f12b2-e7fb-409e-bb1a-b69a3ea97616 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:28 np0005604943 nova_compute[238883]: 2026-02-02 11:55:28.176 238887 DEBUG oslo_concurrency.lockutils [req-98a1ee51-0995-4224-9952-a54a4e4a085b req-ac6f12b2-e7fb-409e-bb1a-b69a3ea97616 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:28 np0005604943 nova_compute[238883]: 2026-02-02 11:55:28.176 238887 DEBUG oslo_concurrency.lockutils [req-98a1ee51-0995-4224-9952-a54a4e4a085b req-ac6f12b2-e7fb-409e-bb1a-b69a3ea97616 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:28 np0005604943 nova_compute[238883]: 2026-02-02 11:55:28.177 238887 DEBUG nova.compute.manager [req-98a1ee51-0995-4224-9952-a54a4e4a085b req-ac6f12b2-e7fb-409e-bb1a-b69a3ea97616 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] No waiting events found dispatching network-vif-plugged-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:55:28 np0005604943 nova_compute[238883]: 2026-02-02 11:55:28.177 238887 WARNING nova.compute.manager [req-98a1ee51-0995-4224-9952-a54a4e4a085b req-ac6f12b2-e7fb-409e-bb1a-b69a3ea97616 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Received unexpected event network-vif-plugged-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c for instance with vm_state active and task_state None.#033[00m
Feb  2 06:55:28 np0005604943 nova_compute[238883]: 2026-02-02 11:55:28.269 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:55:28 np0005604943 nova_compute[238883]: 2026-02-02 11:55:28.270 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 06:55:28 np0005604943 nova_compute[238883]: 2026-02-02 11:55:28.271 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 06:55:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:28.345 155011 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Feb  2 06:55:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:28.346 155011 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpuet8pyad/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Feb  2 06:55:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:28.146 245414 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb  2 06:55:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:28.151 245414 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb  2 06:55:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:28.153 245414 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Feb  2 06:55:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:28.153 245414 INFO oslo.privsep.daemon [-] privsep daemon running as pid 245414#033[00m
Feb  2 06:55:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:28.350 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[af8844c2-9f5f-4a6e-aa45-2aa28debd3c3]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:28 np0005604943 nova_compute[238883]: 2026-02-02 11:55:28.495 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "refresh_cache-7a09bf38-57d0-4d6f-a224-43442657d36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:55:28 np0005604943 nova_compute[238883]: 2026-02-02 11:55:28.495 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquired lock "refresh_cache-7a09bf38-57d0-4d6f-a224-43442657d36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:55:28 np0005604943 nova_compute[238883]: 2026-02-02 11:55:28.496 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 06:55:28 np0005604943 nova_compute[238883]: 2026-02-02 11:55:28.496 238887 DEBUG nova.objects.instance [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7a09bf38-57d0-4d6f-a224-43442657d36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:55:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Feb  2 06:55:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Feb  2 06:55:28 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Feb  2 06:55:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:28.919 245414 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:28.919 245414 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:28.920 245414 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:29 np0005604943 nova_compute[238883]: 2026-02-02 11:55:29.066 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.472 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[616cd4a6-ae5f-4121-bb87-72cff7d70dcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:29 np0005604943 NetworkManager[49093]: <info>  [1770033329.4960] manager: (tapf00d2f52-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.492 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[4286e14b-5623-4d98-b25d-2ed492119bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.514 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[44f852e4-d3de-4fb0-94a6-18f534215523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.516 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb3b5aa-faae-4299-8201-b64946ee99d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:29 np0005604943 systemd-udevd[245426]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:55:29 np0005604943 NetworkManager[49093]: <info>  [1770033329.5335] device (tapf00d2f52-f0): carrier: link connected
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.538 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[14a58d5e-2397-4a45-a3f9-823fbae83aad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.550 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[20eabe96-f517-443b-8d58-8a0cb54312f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00d2f52-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:c3:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377488, 'reachable_time': 30979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245444, 'error': None, 'target': 'ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.559 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b054ba8f-8e65-4ee3-98e7-b9e5c48ed4bd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:c3e4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377488, 'tstamp': 377488}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245445, 'error': None, 'target': 'ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.566 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[8545fbf9-199c-46c7-84d2-496362a6865a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00d2f52-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:c3:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377488, 'reachable_time': 30979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 245446, 'error': None, 'target': 'ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.579 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[78127a62-3102-4f02-829b-bde7263b0bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.622 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[41697e80-b70e-48c6-8b63-0b47ede56a34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.625 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00d2f52-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.626 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.626 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00d2f52-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:55:29 np0005604943 kernel: tapf00d2f52-f0: entered promiscuous mode
Feb  2 06:55:29 np0005604943 NetworkManager[49093]: <info>  [1770033329.6695] manager: (tapf00d2f52-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.674 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00d2f52-f0, col_values=(('external_ids', {'iface-id': 'ae396908-b0b5-4524-8aea-d6ce5b05a23e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:55:29 np0005604943 ovn_controller[145056]: 2026-02-02T11:55:29Z|00031|binding|INFO|Releasing lport ae396908-b0b5-4524-8aea-d6ce5b05a23e from this chassis (sb_readonly=0)
Feb  2 06:55:29 np0005604943 nova_compute[238883]: 2026-02-02 11:55:29.676 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:29 np0005604943 nova_compute[238883]: 2026-02-02 11:55:29.690 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.691 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f00d2f52-f227-4a46-8fbf-09609a953903.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f00d2f52-f227-4a46-8fbf-09609a953903.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.692 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c2f41b-c935-453d-bd1e-c21356dd694b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.693 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-f00d2f52-f227-4a46-8fbf-09609a953903
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/f00d2f52-f227-4a46-8fbf-09609a953903.pid.haproxy
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID f00d2f52-f227-4a46-8fbf-09609a953903
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 06:55:29 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:29.693 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903', 'env', 'PROCESS_TAG=haproxy-f00d2f52-f227-4a46-8fbf-09609a953903', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f00d2f52-f227-4a46-8fbf-09609a953903.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 06:55:29 np0005604943 nova_compute[238883]: 2026-02-02 11:55:29.843 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Updating instance_info_cache with network_info: [{"id": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "address": "fa:16:3e:a3:02:65", "network": {"id": "f00d2f52-f227-4a46-8fbf-09609a953903", "bridge": "br-int", "label": "tempest-VolumesActionsTest-8380463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38eed61cf7d3411bbda8849ccc572a02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf984b39b-d2", "ovs_interfaceid": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:55:29 np0005604943 nova_compute[238883]: 2026-02-02 11:55:29.862 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Releasing lock "refresh_cache-7a09bf38-57d0-4d6f-a224-43442657d36e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:55:29 np0005604943 nova_compute[238883]: 2026-02-02 11:55:29.863 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 06:55:29 np0005604943 nova_compute[238883]: 2026-02-02 11:55:29.864 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:55:29 np0005604943 nova_compute[238883]: 2026-02-02 11:55:29.864 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:55:29 np0005604943 nova_compute[238883]: 2026-02-02 11:55:29.865 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:55:29 np0005604943 nova_compute[238883]: 2026-02-02 11:55:29.865 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:55:29 np0005604943 nova_compute[238883]: 2026-02-02 11:55:29.866 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:55:30 np0005604943 podman[245479]: 2026-02-02 11:55:30.014630462 +0000 UTC m=+0.053146293 container create 77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb  2 06:55:30 np0005604943 systemd[1]: Started libpod-conmon-77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f.scope.
Feb  2 06:55:30 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:55:30 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b974cb4c38ba520a040942058e1be8815fe88571ebaa5aa036a2022b7621b4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:30 np0005604943 podman[245479]: 2026-02-02 11:55:29.983905882 +0000 UTC m=+0.022421733 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 06:55:30 np0005604943 podman[245479]: 2026-02-02 11:55:30.093553758 +0000 UTC m=+0.132069609 container init 77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Feb  2 06:55:30 np0005604943 podman[245479]: 2026-02-02 11:55:30.09983517 +0000 UTC m=+0.138351001 container start 77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 06:55:30 np0005604943 neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903[245495]: [NOTICE]   (245499) : New worker (245501) forked
Feb  2 06:55:30 np0005604943 neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903[245495]: [NOTICE]   (245499) : Loading success.
Feb  2 06:55:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 40 KiB/s wr, 258 op/s
Feb  2 06:55:30 np0005604943 nova_compute[238883]: 2026-02-02 11:55:30.886 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Feb  2 06:55:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Feb  2 06:55:30 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Feb  2 06:55:31 np0005604943 nova_compute[238883]: 2026-02-02 11:55:31.231 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:55:31 np0005604943 nova_compute[238883]: 2026-02-02 11:55:31.232 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:55:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2894755986' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2894755986' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 32 KiB/s wr, 209 op/s
Feb  2 06:55:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2711689760' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2711689760' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.373 238887 DEBUG oslo_concurrency.lockutils [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Acquiring lock "7a09bf38-57d0-4d6f-a224-43442657d36e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.373 238887 DEBUG oslo_concurrency.lockutils [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.374 238887 DEBUG oslo_concurrency.lockutils [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Acquiring lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.374 238887 DEBUG oslo_concurrency.lockutils [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.374 238887 DEBUG oslo_concurrency.lockutils [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.375 238887 INFO nova.compute.manager [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Terminating instance#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.376 238887 DEBUG nova.compute.manager [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 06:55:33 np0005604943 kernel: tapf984b39b-d2 (unregistering): left promiscuous mode
Feb  2 06:55:33 np0005604943 NetworkManager[49093]: <info>  [1770033333.4199] device (tapf984b39b-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 06:55:33 np0005604943 ovn_controller[145056]: 2026-02-02T11:55:33Z|00032|binding|INFO|Releasing lport f984b39b-d2ed-40c2-b5f7-631f92ebbb0c from this chassis (sb_readonly=0)
Feb  2 06:55:33 np0005604943 ovn_controller[145056]: 2026-02-02T11:55:33Z|00033|binding|INFO|Setting lport f984b39b-d2ed-40c2-b5f7-631f92ebbb0c down in Southbound
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.424 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:33 np0005604943 ovn_controller[145056]: 2026-02-02T11:55:33Z|00034|binding|INFO|Removing iface tapf984b39b-d2 ovn-installed in OVS
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.426 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.434 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:02:65 10.100.0.8'], port_security=['fa:16:3e:a3:02:65 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7a09bf38-57d0-4d6f-a224-43442657d36e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00d2f52-f227-4a46-8fbf-09609a953903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38eed61cf7d3411bbda8849ccc572a02', 'neutron:revision_number': '4', 'neutron:security_group_ids': '98d0c131-f06c-4a3a-b5df-19a2fff19b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b95b83c-02ae-4601-8660-d30085871383, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=f984b39b-d2ed-40c2-b5f7-631f92ebbb0c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.435 155011 INFO neutron.agent.ovn.metadata.agent [-] Port f984b39b-d2ed-40c2-b5f7-631f92ebbb0c in datapath f00d2f52-f227-4a46-8fbf-09609a953903 unbound from our chassis#033[00m
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.437 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f00d2f52-f227-4a46-8fbf-09609a953903, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.437 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.438 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c27deb04-9fe6-4ce1-8f1b-4a4aca79c524]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.439 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903 namespace which is not needed anymore#033[00m
Feb  2 06:55:33 np0005604943 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Feb  2 06:55:33 np0005604943 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 7.723s CPU time.
Feb  2 06:55:33 np0005604943 systemd-machined[206973]: Machine qemu-1-instance-00000001 terminated.
Feb  2 06:55:33 np0005604943 neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903[245495]: [NOTICE]   (245499) : haproxy version is 2.8.14-c23fe91
Feb  2 06:55:33 np0005604943 neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903[245495]: [NOTICE]   (245499) : path to executable is /usr/sbin/haproxy
Feb  2 06:55:33 np0005604943 neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903[245495]: [WARNING]  (245499) : Exiting Master process...
Feb  2 06:55:33 np0005604943 neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903[245495]: [ALERT]    (245499) : Current worker (245501) exited with code 143 (Terminated)
Feb  2 06:55:33 np0005604943 neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903[245495]: [WARNING]  (245499) : All workers exited. Exiting... (0)
Feb  2 06:55:33 np0005604943 systemd[1]: libpod-77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f.scope: Deactivated successfully.
Feb  2 06:55:33 np0005604943 conmon[245495]: conmon 77e47292096e14912afa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f.scope/container/memory.events
Feb  2 06:55:33 np0005604943 podman[245535]: 2026-02-02 11:55:33.571004573 +0000 UTC m=+0.041536346 container died 77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 06:55:33 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f-userdata-shm.mount: Deactivated successfully.
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.600 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:33 np0005604943 systemd[1]: var-lib-containers-storage-overlay-07b974cb4c38ba520a040942058e1be8815fe88571ebaa5aa036a2022b7621b4-merged.mount: Deactivated successfully.
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.604 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.613 238887 INFO nova.virt.libvirt.driver [-] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Instance destroyed successfully.#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.614 238887 DEBUG nova.objects.instance [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lazy-loading 'resources' on Instance uuid 7a09bf38-57d0-4d6f-a224-43442657d36e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:55:33 np0005604943 podman[245535]: 2026-02-02 11:55:33.620573428 +0000 UTC m=+0.091105201 container cleanup 77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:55:33 np0005604943 systemd[1]: libpod-conmon-77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f.scope: Deactivated successfully.
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.636 238887 DEBUG nova.virt.libvirt.vif [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-970181999',display_name='tempest-VolumesActionsTest-instance-970181999',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-970181999',id=1,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:55:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='38eed61cf7d3411bbda8849ccc572a02',ramdisk_id='',reservation_id='r-ha3pvaoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_dis
k='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1711581404',owner_user_name='tempest-VolumesActionsTest-1711581404-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:55:26Z,user_data=None,user_id='5a36971365664536a708363aa77853a1',uuid=7a09bf38-57d0-4d6f-a224-43442657d36e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "address": "fa:16:3e:a3:02:65", "network": {"id": "f00d2f52-f227-4a46-8fbf-09609a953903", "bridge": "br-int", "label": "tempest-VolumesActionsTest-8380463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38eed61cf7d3411bbda8849ccc572a02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf984b39b-d2", "ovs_interfaceid": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.637 238887 DEBUG nova.network.os_vif_util [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Converting VIF {"id": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "address": "fa:16:3e:a3:02:65", "network": {"id": "f00d2f52-f227-4a46-8fbf-09609a953903", "bridge": "br-int", "label": "tempest-VolumesActionsTest-8380463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "38eed61cf7d3411bbda8849ccc572a02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf984b39b-d2", "ovs_interfaceid": "f984b39b-d2ed-40c2-b5f7-631f92ebbb0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.638 238887 DEBUG nova.network.os_vif_util [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:02:65,bridge_name='br-int',has_traffic_filtering=True,id=f984b39b-d2ed-40c2-b5f7-631f92ebbb0c,network=Network(f00d2f52-f227-4a46-8fbf-09609a953903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf984b39b-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.639 238887 DEBUG os_vif [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:02:65,bridge_name='br-int',has_traffic_filtering=True,id=f984b39b-d2ed-40c2-b5f7-631f92ebbb0c,network=Network(f00d2f52-f227-4a46-8fbf-09609a953903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf984b39b-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.641 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.641 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf984b39b-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.643 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.647 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.652 238887 INFO os_vif [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:02:65,bridge_name='br-int',has_traffic_filtering=True,id=f984b39b-d2ed-40c2-b5f7-631f92ebbb0c,network=Network(f00d2f52-f227-4a46-8fbf-09609a953903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf984b39b-d2')#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.689 238887 DEBUG nova.compute.manager [req-a5295897-63bb-47df-81b2-490fedbe0e86 req-9d78a8dd-f1ba-4033-a0d6-0f80ff362464 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Received event network-vif-unplugged-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.689 238887 DEBUG oslo_concurrency.lockutils [req-a5295897-63bb-47df-81b2-490fedbe0e86 req-9d78a8dd-f1ba-4033-a0d6-0f80ff362464 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.690 238887 DEBUG oslo_concurrency.lockutils [req-a5295897-63bb-47df-81b2-490fedbe0e86 req-9d78a8dd-f1ba-4033-a0d6-0f80ff362464 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.690 238887 DEBUG oslo_concurrency.lockutils [req-a5295897-63bb-47df-81b2-490fedbe0e86 req-9d78a8dd-f1ba-4033-a0d6-0f80ff362464 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.690 238887 DEBUG nova.compute.manager [req-a5295897-63bb-47df-81b2-490fedbe0e86 req-9d78a8dd-f1ba-4033-a0d6-0f80ff362464 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] No waiting events found dispatching network-vif-unplugged-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.690 238887 DEBUG nova.compute.manager [req-a5295897-63bb-47df-81b2-490fedbe0e86 req-9d78a8dd-f1ba-4033-a0d6-0f80ff362464 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Received event network-vif-unplugged-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 06:55:33 np0005604943 podman[245572]: 2026-02-02 11:55:33.699051102 +0000 UTC m=+0.051203810 container remove 77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.705 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[56585926-5a22-4ac4-ac72-88698dcc066c]: (4, ('Mon Feb  2 11:55:33 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903 (77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f)\n77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f\nMon Feb  2 11:55:33 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903 (77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f)\n77e47292096e14912afa87e769d3c07defda752f0eb627deeaa9b73c1362d45f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.708 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[694425fe-12ef-465d-8672-979957b4ead2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.709 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00d2f52-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.712 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:33 np0005604943 kernel: tapf00d2f52-f0: left promiscuous mode
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.719 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.724 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[05322446-20fe-42f0-b47f-8ee5ec36ca46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.740 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[773d1d3d-5aa3-44d9-8cf4-75eb549da820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.743 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[13f07556-b45b-4beb-90f5-a61268a2f56a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.760 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc7617f-73d4-469b-92cd-f27e2cd55e0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377481, 'reachable_time': 38805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245605, 'error': None, 'target': 'ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:33 np0005604943 systemd[1]: run-netns-ovnmeta\x2df00d2f52\x2df227\x2d4a46\x2d8fbf\x2d09609a953903.mount: Deactivated successfully.
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.775 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f00d2f52-f227-4a46-8fbf-09609a953903 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 06:55:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:33.776 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f30b25-7c9b-4597-9673-d664815a6287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.930 238887 INFO nova.virt.libvirt.driver [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Deleting instance files /var/lib/nova/instances/7a09bf38-57d0-4d6f-a224-43442657d36e_del#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.931 238887 INFO nova.virt.libvirt.driver [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Deletion of /var/lib/nova/instances/7a09bf38-57d0-4d6f-a224-43442657d36e_del complete#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.998 238887 DEBUG nova.virt.libvirt.host [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Feb  2 06:55:33 np0005604943 nova_compute[238883]: 2026-02-02 11:55:33.999 238887 INFO nova.virt.libvirt.host [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] UEFI support detected#033[00m
Feb  2 06:55:34 np0005604943 nova_compute[238883]: 2026-02-02 11:55:34.001 238887 INFO nova.compute.manager [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 06:55:34 np0005604943 nova_compute[238883]: 2026-02-02 11:55:34.002 238887 DEBUG oslo.service.loopingcall [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 06:55:34 np0005604943 nova_compute[238883]: 2026-02-02 11:55:34.003 238887 DEBUG nova.compute.manager [-] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 06:55:34 np0005604943 nova_compute[238883]: 2026-02-02 11:55:34.003 238887 DEBUG nova.network.neutron [-] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 06:55:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.2 KiB/s wr, 202 op/s
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.265 238887 DEBUG nova.network.neutron [-] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.287 238887 INFO nova.compute.manager [-] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Took 1.28 seconds to deallocate network for instance.#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.342 238887 DEBUG oslo_concurrency.lockutils [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.342 238887 DEBUG oslo_concurrency.lockutils [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.413 238887 DEBUG oslo_concurrency.processutils [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.849 238887 DEBUG nova.compute.manager [req-cab12b76-1b2e-47fa-b2d6-df911850148b req-252c6341-7a79-4a14-98aa-fab3640fa6dd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Received event network-vif-plugged-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.849 238887 DEBUG oslo_concurrency.lockutils [req-cab12b76-1b2e-47fa-b2d6-df911850148b req-252c6341-7a79-4a14-98aa-fab3640fa6dd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.850 238887 DEBUG oslo_concurrency.lockutils [req-cab12b76-1b2e-47fa-b2d6-df911850148b req-252c6341-7a79-4a14-98aa-fab3640fa6dd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.850 238887 DEBUG oslo_concurrency.lockutils [req-cab12b76-1b2e-47fa-b2d6-df911850148b req-252c6341-7a79-4a14-98aa-fab3640fa6dd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.850 238887 DEBUG nova.compute.manager [req-cab12b76-1b2e-47fa-b2d6-df911850148b req-252c6341-7a79-4a14-98aa-fab3640fa6dd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] No waiting events found dispatching network-vif-plugged-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.850 238887 WARNING nova.compute.manager [req-cab12b76-1b2e-47fa-b2d6-df911850148b req-252c6341-7a79-4a14-98aa-fab3640fa6dd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Received unexpected event network-vif-plugged-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c for instance with vm_state deleted and task_state None.#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.850 238887 DEBUG nova.compute.manager [req-cab12b76-1b2e-47fa-b2d6-df911850148b req-252c6341-7a79-4a14-98aa-fab3640fa6dd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Received event network-vif-deleted-f984b39b-d2ed-40c2-b5f7-631f92ebbb0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.889 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:55:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694379663' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.967 238887 DEBUG oslo_concurrency.processutils [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.972 238887 DEBUG nova.compute.provider_tree [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:55:35 np0005604943 nova_compute[238883]: 2026-02-02 11:55:35.988 238887 DEBUG nova.scheduler.client.report [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:55:36 np0005604943 nova_compute[238883]: 2026-02-02 11:55:36.011 238887 DEBUG oslo_concurrency.lockutils [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:36 np0005604943 nova_compute[238883]: 2026-02-02 11:55:36.054 238887 INFO nova.scheduler.client.report [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Deleted allocations for instance 7a09bf38-57d0-4d6f-a224-43442657d36e#033[00m
Feb  2 06:55:36 np0005604943 nova_compute[238883]: 2026-02-02 11:55:36.148 238887 DEBUG oslo_concurrency.lockutils [None req-4e3d4f67-ae4f-4896-9cab-380de12d351b 5a36971365664536a708363aa77853a1 38eed61cf7d3411bbda8849ccc572a02 - - default default] Lock "7a09bf38-57d0-4d6f-a224-43442657d36e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:55:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 1016 KiB/s rd, 3.4 KiB/s wr, 116 op/s
Feb  2 06:55:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:55:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Feb  2 06:55:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Feb  2 06:55:37 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Feb  2 06:55:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 62 MiB data, 186 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 4.0 KiB/s wr, 111 op/s
Feb  2 06:55:38 np0005604943 nova_compute[238883]: 2026-02-02 11:55:38.644 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 3.6 KiB/s wr, 97 op/s
Feb  2 06:55:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3965989122' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3965989122' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:55:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:55:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:55:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:55:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:55:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:55:40 np0005604943 nova_compute[238883]: 2026-02-02 11:55:40.889 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Feb  2 06:55:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Feb  2 06:55:42 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Feb  2 06:55:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 38 op/s
Feb  2 06:55:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:55:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Feb  2 06:55:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Feb  2 06:55:43 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Feb  2 06:55:43 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  2 06:55:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3401894182' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3401894182' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:43 np0005604943 nova_compute[238883]: 2026-02-02 11:55:43.646 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.126306) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033344126419, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2346, "num_deletes": 261, "total_data_size": 3507813, "memory_usage": 3561616, "flush_reason": "Manual Compaction"}
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Feb  2 06:55:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.5 KiB/s wr, 59 op/s
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033344166906, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3442937, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16352, "largest_seqno": 18697, "table_properties": {"data_size": 3431859, "index_size": 7252, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22931, "raw_average_key_size": 21, "raw_value_size": 3409688, "raw_average_value_size": 3131, "num_data_blocks": 318, "num_entries": 1089, "num_filter_entries": 1089, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033169, "oldest_key_time": 1770033169, "file_creation_time": 1770033344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 40610 microseconds, and 6761 cpu microseconds.
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.166950) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3442937 bytes OK
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.166969) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.172070) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.172165) EVENT_LOG_v1 {"time_micros": 1770033344172150, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.172226) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3497754, prev total WAL file size 3497754, number of live WAL files 2.
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.173378) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3362KB)], [38(7706KB)]
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033344173465, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11334012, "oldest_snapshot_seqno": -1}
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4580 keys, 9574562 bytes, temperature: kUnknown
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033344374189, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9574562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9539518, "index_size": 22518, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11461, "raw_key_size": 111119, "raw_average_key_size": 24, "raw_value_size": 9452381, "raw_average_value_size": 2063, "num_data_blocks": 949, "num_entries": 4580, "num_filter_entries": 4580, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770033344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.374662) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9574562 bytes
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.455604) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 56.4 rd, 47.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5108, records dropped: 528 output_compression: NoCompression
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.455667) EVENT_LOG_v1 {"time_micros": 1770033344455649, "job": 18, "event": "compaction_finished", "compaction_time_micros": 200895, "compaction_time_cpu_micros": 13562, "output_level": 6, "num_output_files": 1, "total_output_size": 9574562, "num_input_records": 5108, "num_output_records": 4580, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033344456072, "job": 18, "event": "table_file_deletion", "file_number": 40}
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033344456775, "job": 18, "event": "table_file_deletion", "file_number": 38}
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.173252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.456816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.456822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.456823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.456824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:55:44.456826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:55:44 np0005604943 podman[245774]: 2026-02-02 11:55:44.83330853 +0000 UTC m=+0.040365514 container create a6b915af7681e63994d44ee30aa7964e511a548f8ce063a0f3013d19cb6623b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_agnesi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:55:44 np0005604943 systemd[1]: Started libpod-conmon-a6b915af7681e63994d44ee30aa7964e511a548f8ce063a0f3013d19cb6623b6.scope.
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3427763967' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3427763967' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:44 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:55:44 np0005604943 podman[245774]: 2026-02-02 11:55:44.81502109 +0000 UTC m=+0.022078114 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:55:44 np0005604943 podman[245774]: 2026-02-02 11:55:44.919287899 +0000 UTC m=+0.126344913 container init a6b915af7681e63994d44ee30aa7964e511a548f8ce063a0f3013d19cb6623b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:55:44 np0005604943 podman[245774]: 2026-02-02 11:55:44.926233199 +0000 UTC m=+0.133290193 container start a6b915af7681e63994d44ee30aa7964e511a548f8ce063a0f3013d19cb6623b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_agnesi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:55:44 np0005604943 podman[245774]: 2026-02-02 11:55:44.929889149 +0000 UTC m=+0.136946163 container attach a6b915af7681e63994d44ee30aa7964e511a548f8ce063a0f3013d19cb6623b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 06:55:44 np0005604943 distracted_agnesi[245791]: 167 167
Feb  2 06:55:44 np0005604943 systemd[1]: libpod-a6b915af7681e63994d44ee30aa7964e511a548f8ce063a0f3013d19cb6623b6.scope: Deactivated successfully.
Feb  2 06:55:44 np0005604943 conmon[245791]: conmon a6b915af7681e63994d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6b915af7681e63994d44ee30aa7964e511a548f8ce063a0f3013d19cb6623b6.scope/container/memory.events
Feb  2 06:55:44 np0005604943 podman[245774]: 2026-02-02 11:55:44.934468094 +0000 UTC m=+0.141525078 container died a6b915af7681e63994d44ee30aa7964e511a548f8ce063a0f3013d19cb6623b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_agnesi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 06:55:44 np0005604943 systemd[1]: var-lib-containers-storage-overlay-816f18c56dc5f14d5680d596a9006040dba5e985ab97bf97d3d43303f09b2318-merged.mount: Deactivated successfully.
Feb  2 06:55:44 np0005604943 podman[245774]: 2026-02-02 11:55:44.973972243 +0000 UTC m=+0.181029227 container remove a6b915af7681e63994d44ee30aa7964e511a548f8ce063a0f3013d19cb6623b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_agnesi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:55:44 np0005604943 systemd[1]: libpod-conmon-a6b915af7681e63994d44ee30aa7964e511a548f8ce063a0f3013d19cb6623b6.scope: Deactivated successfully.
Feb  2 06:55:45 np0005604943 podman[245814]: 2026-02-02 11:55:45.126633925 +0000 UTC m=+0.045567356 container create 5837bf8a07590504f4db80e5e7b2bead913390f3258173a46f0d93fe59d2f7de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_shannon, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:55:45 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:55:45 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:55:45 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:55:45 np0005604943 systemd[1]: Started libpod-conmon-5837bf8a07590504f4db80e5e7b2bead913390f3258173a46f0d93fe59d2f7de.scope.
Feb  2 06:55:45 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:55:45 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ef0718c6de78567c401e05b108b83c6a1ba5e686cf1d12fd88cb047039d93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:45 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ef0718c6de78567c401e05b108b83c6a1ba5e686cf1d12fd88cb047039d93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:45 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ef0718c6de78567c401e05b108b83c6a1ba5e686cf1d12fd88cb047039d93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:45 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ef0718c6de78567c401e05b108b83c6a1ba5e686cf1d12fd88cb047039d93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:45 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ef0718c6de78567c401e05b108b83c6a1ba5e686cf1d12fd88cb047039d93/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:45 np0005604943 podman[245814]: 2026-02-02 11:55:45.106716661 +0000 UTC m=+0.025650122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:55:45 np0005604943 podman[245814]: 2026-02-02 11:55:45.212971344 +0000 UTC m=+0.131904795 container init 5837bf8a07590504f4db80e5e7b2bead913390f3258173a46f0d93fe59d2f7de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_shannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 06:55:45 np0005604943 podman[245814]: 2026-02-02 11:55:45.221423255 +0000 UTC m=+0.140356686 container start 5837bf8a07590504f4db80e5e7b2bead913390f3258173a46f0d93fe59d2f7de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:55:45 np0005604943 podman[245814]: 2026-02-02 11:55:45.22707184 +0000 UTC m=+0.146005301 container attach 5837bf8a07590504f4db80e5e7b2bead913390f3258173a46f0d93fe59d2f7de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_shannon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 06:55:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Feb  2 06:55:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Feb  2 06:55:45 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  2 06:55:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Feb  2 06:55:45 np0005604943 thirsty_shannon[245830]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:55:45 np0005604943 thirsty_shannon[245830]: --> All data devices are unavailable
Feb  2 06:55:45 np0005604943 systemd[1]: libpod-5837bf8a07590504f4db80e5e7b2bead913390f3258173a46f0d93fe59d2f7de.scope: Deactivated successfully.
Feb  2 06:55:45 np0005604943 podman[245814]: 2026-02-02 11:55:45.743154461 +0000 UTC m=+0.662087902 container died 5837bf8a07590504f4db80e5e7b2bead913390f3258173a46f0d93fe59d2f7de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 06:55:45 np0005604943 systemd[1]: var-lib-containers-storage-overlay-7f5ef0718c6de78567c401e05b108b83c6a1ba5e686cf1d12fd88cb047039d93-merged.mount: Deactivated successfully.
Feb  2 06:55:45 np0005604943 podman[245814]: 2026-02-02 11:55:45.786043044 +0000 UTC m=+0.704976475 container remove 5837bf8a07590504f4db80e5e7b2bead913390f3258173a46f0d93fe59d2f7de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 06:55:45 np0005604943 systemd[1]: libpod-conmon-5837bf8a07590504f4db80e5e7b2bead913390f3258173a46f0d93fe59d2f7de.scope: Deactivated successfully.
Feb  2 06:55:45 np0005604943 nova_compute[238883]: 2026-02-02 11:55:45.890 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/714317281' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/714317281' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 6.4 KiB/s wr, 84 op/s
Feb  2 06:55:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/710036360' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/710036360' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:46 np0005604943 podman[245925]: 2026-02-02 11:55:46.299635428 +0000 UTC m=+0.043999314 container create dcd72b8d16a5d265e07056a243fa0e0a066874894b55bd9b9aa6ac4add419d8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:55:46 np0005604943 systemd[1]: Started libpod-conmon-dcd72b8d16a5d265e07056a243fa0e0a066874894b55bd9b9aa6ac4add419d8d.scope.
Feb  2 06:55:46 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:55:46 np0005604943 podman[245925]: 2026-02-02 11:55:46.283281331 +0000 UTC m=+0.027645237 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:55:46 np0005604943 podman[245925]: 2026-02-02 11:55:46.38130013 +0000 UTC m=+0.125664046 container init dcd72b8d16a5d265e07056a243fa0e0a066874894b55bd9b9aa6ac4add419d8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 06:55:46 np0005604943 podman[245925]: 2026-02-02 11:55:46.388387003 +0000 UTC m=+0.132750889 container start dcd72b8d16a5d265e07056a243fa0e0a066874894b55bd9b9aa6ac4add419d8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:55:46 np0005604943 podman[245925]: 2026-02-02 11:55:46.391940811 +0000 UTC m=+0.136304717 container attach dcd72b8d16a5d265e07056a243fa0e0a066874894b55bd9b9aa6ac4add419d8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 06:55:46 np0005604943 epic_banach[245942]: 167 167
Feb  2 06:55:46 np0005604943 systemd[1]: libpod-dcd72b8d16a5d265e07056a243fa0e0a066874894b55bd9b9aa6ac4add419d8d.scope: Deactivated successfully.
Feb  2 06:55:46 np0005604943 podman[245925]: 2026-02-02 11:55:46.396865825 +0000 UTC m=+0.141229721 container died dcd72b8d16a5d265e07056a243fa0e0a066874894b55bd9b9aa6ac4add419d8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:55:46 np0005604943 systemd[1]: var-lib-containers-storage-overlay-41355cf51988372526abad6c4e543ecce33d6b9acfb0fe153a76b94a699b25bd-merged.mount: Deactivated successfully.
Feb  2 06:55:46 np0005604943 podman[245925]: 2026-02-02 11:55:46.434868713 +0000 UTC m=+0.179232599 container remove dcd72b8d16a5d265e07056a243fa0e0a066874894b55bd9b9aa6ac4add419d8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 06:55:46 np0005604943 systemd[1]: libpod-conmon-dcd72b8d16a5d265e07056a243fa0e0a066874894b55bd9b9aa6ac4add419d8d.scope: Deactivated successfully.
Feb  2 06:55:46 np0005604943 podman[245965]: 2026-02-02 11:55:46.552250661 +0000 UTC m=+0.037632170 container create ad7cc7b51525d317662816cd1b0e9246f5700e8592888372ef2aea0bb5b9e5f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_allen, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 06:55:46 np0005604943 systemd[1]: Started libpod-conmon-ad7cc7b51525d317662816cd1b0e9246f5700e8592888372ef2aea0bb5b9e5f3.scope.
Feb  2 06:55:46 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:55:46 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b120a822857a33f2a32aac3ee21df2e0b0ae7c2216e3a90ba2ee75f8d4d3581/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:46 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b120a822857a33f2a32aac3ee21df2e0b0ae7c2216e3a90ba2ee75f8d4d3581/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:46 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b120a822857a33f2a32aac3ee21df2e0b0ae7c2216e3a90ba2ee75f8d4d3581/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:46 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b120a822857a33f2a32aac3ee21df2e0b0ae7c2216e3a90ba2ee75f8d4d3581/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:46 np0005604943 podman[245965]: 2026-02-02 11:55:46.535754551 +0000 UTC m=+0.021136080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:55:46 np0005604943 podman[245965]: 2026-02-02 11:55:46.639067743 +0000 UTC m=+0.124449302 container init ad7cc7b51525d317662816cd1b0e9246f5700e8592888372ef2aea0bb5b9e5f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:55:46 np0005604943 podman[245965]: 2026-02-02 11:55:46.647112213 +0000 UTC m=+0.132493732 container start ad7cc7b51525d317662816cd1b0e9246f5700e8592888372ef2aea0bb5b9e5f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_allen, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:55:46 np0005604943 podman[245965]: 2026-02-02 11:55:46.650738573 +0000 UTC m=+0.136120102 container attach ad7cc7b51525d317662816cd1b0e9246f5700e8592888372ef2aea0bb5b9e5f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_allen, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:55:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2414237955' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2414237955' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:46 np0005604943 priceless_allen[245982]: {
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:    "0": [
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:        {
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "devices": [
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "/dev/loop3"
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            ],
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_name": "ceph_lv0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_size": "21470642176",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "name": "ceph_lv0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "tags": {
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.cluster_name": "ceph",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.crush_device_class": "",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.encrypted": "0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.objectstore": "bluestore",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.osd_id": "0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.type": "block",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.vdo": "0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.with_tpm": "0"
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            },
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "type": "block",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "vg_name": "ceph_vg0"
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:        }
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:    ],
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:    "1": [
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:        {
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "devices": [
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "/dev/loop4"
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            ],
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_name": "ceph_lv1",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_size": "21470642176",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "name": "ceph_lv1",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "tags": {
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.cluster_name": "ceph",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.crush_device_class": "",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.encrypted": "0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.objectstore": "bluestore",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.osd_id": "1",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.type": "block",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.vdo": "0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.with_tpm": "0"
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            },
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "type": "block",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "vg_name": "ceph_vg1"
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:        }
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:    ],
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:    "2": [
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:        {
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "devices": [
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "/dev/loop5"
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            ],
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_name": "ceph_lv2",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_size": "21470642176",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "name": "ceph_lv2",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "tags": {
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.cluster_name": "ceph",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.crush_device_class": "",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.encrypted": "0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.objectstore": "bluestore",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.osd_id": "2",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.type": "block",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.vdo": "0",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:                "ceph.with_tpm": "0"
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            },
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "type": "block",
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:            "vg_name": "ceph_vg2"
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:        }
Feb  2 06:55:46 np0005604943 priceless_allen[245982]:    ]
Feb  2 06:55:46 np0005604943 priceless_allen[245982]: }
Feb  2 06:55:47 np0005604943 systemd[1]: libpod-ad7cc7b51525d317662816cd1b0e9246f5700e8592888372ef2aea0bb5b9e5f3.scope: Deactivated successfully.
Feb  2 06:55:47 np0005604943 podman[245965]: 2026-02-02 11:55:47.00013977 +0000 UTC m=+0.485521289 container died ad7cc7b51525d317662816cd1b0e9246f5700e8592888372ef2aea0bb5b9e5f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_allen, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 06:55:47 np0005604943 systemd[1]: var-lib-containers-storage-overlay-6b120a822857a33f2a32aac3ee21df2e0b0ae7c2216e3a90ba2ee75f8d4d3581-merged.mount: Deactivated successfully.
Feb  2 06:55:47 np0005604943 podman[245965]: 2026-02-02 11:55:47.098530118 +0000 UTC m=+0.583911627 container remove ad7cc7b51525d317662816cd1b0e9246f5700e8592888372ef2aea0bb5b9e5f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_allen, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 06:55:47 np0005604943 systemd[1]: libpod-conmon-ad7cc7b51525d317662816cd1b0e9246f5700e8592888372ef2aea0bb5b9e5f3.scope: Deactivated successfully.
Feb  2 06:55:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:55:47 np0005604943 podman[246067]: 2026-02-02 11:55:47.520723706 +0000 UTC m=+0.040452197 container create d249d1797a6a29ff7da9a2e5f79d94cdac672c51c66ce868c2fb363833f48453 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhabha, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Feb  2 06:55:47 np0005604943 systemd[1]: Started libpod-conmon-d249d1797a6a29ff7da9a2e5f79d94cdac672c51c66ce868c2fb363833f48453.scope.
Feb  2 06:55:47 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:55:47 np0005604943 podman[246067]: 2026-02-02 11:55:47.574272539 +0000 UTC m=+0.094001050 container init d249d1797a6a29ff7da9a2e5f79d94cdac672c51c66ce868c2fb363833f48453 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:55:47 np0005604943 podman[246067]: 2026-02-02 11:55:47.579467771 +0000 UTC m=+0.099196272 container start d249d1797a6a29ff7da9a2e5f79d94cdac672c51c66ce868c2fb363833f48453 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 06:55:47 np0005604943 focused_bhabha[246083]: 167 167
Feb  2 06:55:47 np0005604943 systemd[1]: libpod-d249d1797a6a29ff7da9a2e5f79d94cdac672c51c66ce868c2fb363833f48453.scope: Deactivated successfully.
Feb  2 06:55:47 np0005604943 podman[246067]: 2026-02-02 11:55:47.590063551 +0000 UTC m=+0.109792072 container attach d249d1797a6a29ff7da9a2e5f79d94cdac672c51c66ce868c2fb363833f48453 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 06:55:47 np0005604943 podman[246067]: 2026-02-02 11:55:47.591305334 +0000 UTC m=+0.111033825 container died d249d1797a6a29ff7da9a2e5f79d94cdac672c51c66ce868c2fb363833f48453 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:55:47 np0005604943 podman[246067]: 2026-02-02 11:55:47.502312273 +0000 UTC m=+0.022040814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:55:47 np0005604943 systemd[1]: var-lib-containers-storage-overlay-d1e4bc17aae275a6a26721c14e74345b566c2f936408cd7f666ff27b95f91e54-merged.mount: Deactivated successfully.
Feb  2 06:55:47 np0005604943 podman[246067]: 2026-02-02 11:55:47.635531633 +0000 UTC m=+0.155260124 container remove d249d1797a6a29ff7da9a2e5f79d94cdac672c51c66ce868c2fb363833f48453 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhabha, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:55:47 np0005604943 systemd[1]: libpod-conmon-d249d1797a6a29ff7da9a2e5f79d94cdac672c51c66ce868c2fb363833f48453.scope: Deactivated successfully.
Feb  2 06:55:47 np0005604943 podman[246107]: 2026-02-02 11:55:47.779112837 +0000 UTC m=+0.043274084 container create 1f7208b6624a547e6e7368960f24bdf0e35f8418ef6091ac78c323763f1e7d6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 06:55:47 np0005604943 systemd[1]: Started libpod-conmon-1f7208b6624a547e6e7368960f24bdf0e35f8418ef6091ac78c323763f1e7d6f.scope.
Feb  2 06:55:47 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:55:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8078b0ec5404363302b21917eded3202ad754ac2d42ce73b3354e2699bfebfe5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8078b0ec5404363302b21917eded3202ad754ac2d42ce73b3354e2699bfebfe5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8078b0ec5404363302b21917eded3202ad754ac2d42ce73b3354e2699bfebfe5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:47 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8078b0ec5404363302b21917eded3202ad754ac2d42ce73b3354e2699bfebfe5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:55:47 np0005604943 podman[246107]: 2026-02-02 11:55:47.760711633 +0000 UTC m=+0.024872900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:55:47 np0005604943 podman[246107]: 2026-02-02 11:55:47.858860985 +0000 UTC m=+0.123022232 container init 1f7208b6624a547e6e7368960f24bdf0e35f8418ef6091ac78c323763f1e7d6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:55:47 np0005604943 podman[246107]: 2026-02-02 11:55:47.864251503 +0000 UTC m=+0.128412750 container start 1f7208b6624a547e6e7368960f24bdf0e35f8418ef6091ac78c323763f1e7d6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 06:55:47 np0005604943 podman[246107]: 2026-02-02 11:55:47.867284185 +0000 UTC m=+0.131445462 container attach 1f7208b6624a547e6e7368960f24bdf0e35f8418ef6091ac78c323763f1e7d6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:55:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 9.7 KiB/s wr, 220 op/s
Feb  2 06:55:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Feb  2 06:55:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Feb  2 06:55:48 np0005604943 lvm[246202]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:55:48 np0005604943 lvm[246203]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:55:48 np0005604943 lvm[246203]: VG ceph_vg1 finished
Feb  2 06:55:48 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Feb  2 06:55:48 np0005604943 lvm[246202]: VG ceph_vg0 finished
Feb  2 06:55:48 np0005604943 lvm[246205]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:55:48 np0005604943 lvm[246205]: VG ceph_vg2 finished
Feb  2 06:55:48 np0005604943 optimistic_golick[246123]: {}
Feb  2 06:55:48 np0005604943 systemd[1]: libpod-1f7208b6624a547e6e7368960f24bdf0e35f8418ef6091ac78c323763f1e7d6f.scope: Deactivated successfully.
Feb  2 06:55:48 np0005604943 systemd[1]: libpod-1f7208b6624a547e6e7368960f24bdf0e35f8418ef6091ac78c323763f1e7d6f.scope: Consumed 1.059s CPU time.
Feb  2 06:55:48 np0005604943 nova_compute[238883]: 2026-02-02 11:55:48.611 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033333.6092513, 7a09bf38-57d0-4d6f-a224-43442657d36e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:55:48 np0005604943 nova_compute[238883]: 2026-02-02 11:55:48.612 238887 INFO nova.compute.manager [-] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] VM Stopped (Lifecycle Event)#033[00m
Feb  2 06:55:48 np0005604943 podman[246208]: 2026-02-02 11:55:48.622679518 +0000 UTC m=+0.023633247 container died 1f7208b6624a547e6e7368960f24bdf0e35f8418ef6091ac78c323763f1e7d6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:55:48 np0005604943 nova_compute[238883]: 2026-02-02 11:55:48.635 238887 DEBUG nova.compute.manager [None req-5cb1da61-a28b-43fc-9198-1f9227504d28 - - - - - -] [instance: 7a09bf38-57d0-4d6f-a224-43442657d36e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:55:48 np0005604943 systemd[1]: var-lib-containers-storage-overlay-8078b0ec5404363302b21917eded3202ad754ac2d42ce73b3354e2699bfebfe5-merged.mount: Deactivated successfully.
Feb  2 06:55:48 np0005604943 nova_compute[238883]: 2026-02-02 11:55:48.649 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:48 np0005604943 podman[246208]: 2026-02-02 11:55:48.66523415 +0000 UTC m=+0.066187869 container remove 1f7208b6624a547e6e7368960f24bdf0e35f8418ef6091ac78c323763f1e7d6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:55:48 np0005604943 systemd[1]: libpod-conmon-1f7208b6624a547e6e7368960f24bdf0e35f8418ef6091ac78c323763f1e7d6f.scope: Deactivated successfully.
Feb  2 06:55:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:55:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:55:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:55:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:55:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:48.785 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:55:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:48.787 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 06:55:48 np0005604943 nova_compute[238883]: 2026-02-02 11:55:48.785 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1560360821' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1560360821' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:49 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:55:49 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:55:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 7.4 KiB/s wr, 188 op/s
Feb  2 06:55:50 np0005604943 nova_compute[238883]: 2026-02-02 11:55:50.892 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 5.6 KiB/s wr, 142 op/s
Feb  2 06:55:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:55:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Feb  2 06:55:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Feb  2 06:55:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Feb  2 06:55:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3819963145' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3819963145' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/435008672' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/435008672' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:53 np0005604943 nova_compute[238883]: 2026-02-02 11:55:53.651 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:55:53.788 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:55:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 190 KiB/s rd, 10 KiB/s wr, 254 op/s
Feb  2 06:55:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/248352284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/248352284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/641326315' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/641326315' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:55:55 np0005604943 nova_compute[238883]: 2026-02-02 11:55:55.895 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 6.4 KiB/s wr, 131 op/s
Feb  2 06:55:56 np0005604943 nova_compute[238883]: 2026-02-02 11:55:56.514 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:55:58 np0005604943 podman[246248]: 2026-02-02 11:55:58.059141901 +0000 UTC m=+0.078010903 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 06:55:58 np0005604943 podman[246247]: 2026-02-02 11:55:58.059180702 +0000 UTC m=+0.076182014 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:55:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 5.8 KiB/s wr, 156 op/s
Feb  2 06:55:58 np0005604943 nova_compute[238883]: 2026-02-02 11:55:58.653 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:55:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:55:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3730448792' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:55:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:55:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3730448792' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 5.4 KiB/s wr, 140 op/s
Feb  2 06:56:00 np0005604943 nova_compute[238883]: 2026-02-02 11:56:00.896 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:56:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1693488652' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:56:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:56:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1693488652' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 41 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 5.4 KiB/s wr, 140 op/s
Feb  2 06:56:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:56:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Feb  2 06:56:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Feb  2 06:56:02 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  2 06:56:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Feb  2 06:56:03 np0005604943 nova_compute[238883]: 2026-02-02 11:56:03.654 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 2.3 KiB/s wr, 84 op/s
Feb  2 06:56:05 np0005604943 nova_compute[238883]: 2026-02-02 11:56:05.897 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 2.3 KiB/s wr, 84 op/s
Feb  2 06:56:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:56:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.6 KiB/s wr, 40 op/s
Feb  2 06:56:08 np0005604943 nova_compute[238883]: 2026-02-02 11:56:08.655 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:56:09
Feb  2 06:56:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:56:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:56:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'volumes', 'vms', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'default.rgw.control']
Feb  2 06:56:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:56:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:56:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1843045026' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:56:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:56:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1843045026' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:10.020 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:10.021 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:10.021 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.0 KiB/s wr, 36 op/s
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:56:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:56:10 np0005604943 nova_compute[238883]: 2026-02-02 11:56:10.900 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:56:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1029487481' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:56:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:56:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1029487481' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:56:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3059793907' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:56:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:56:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3059793907' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:11 np0005604943 nova_compute[238883]: 2026-02-02 11:56:11.637 238887 DEBUG oslo_concurrency.lockutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Acquiring lock "52749176-480b-4b66-b02a-2d5041414572" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:11 np0005604943 nova_compute[238883]: 2026-02-02 11:56:11.637 238887 DEBUG oslo_concurrency.lockutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "52749176-480b-4b66-b02a-2d5041414572" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:11 np0005604943 nova_compute[238883]: 2026-02-02 11:56:11.667 238887 DEBUG nova.compute.manager [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 06:56:11 np0005604943 nova_compute[238883]: 2026-02-02 11:56:11.738 238887 DEBUG oslo_concurrency.lockutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:11 np0005604943 nova_compute[238883]: 2026-02-02 11:56:11.739 238887 DEBUG oslo_concurrency.lockutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:11 np0005604943 nova_compute[238883]: 2026-02-02 11:56:11.750 238887 DEBUG nova.virt.hardware [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 06:56:11 np0005604943 nova_compute[238883]: 2026-02-02 11:56:11.751 238887 INFO nova.compute.claims [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 06:56:11 np0005604943 nova_compute[238883]: 2026-02-02 11:56:11.855 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.0 KiB/s wr, 36 op/s
Feb  2 06:56:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:56:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/110981718' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.425 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.431 238887 DEBUG nova.compute.provider_tree [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:56:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.451 238887 DEBUG nova.scheduler.client.report [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.475 238887 DEBUG oslo_concurrency.lockutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.476 238887 DEBUG nova.compute.manager [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.538 238887 DEBUG nova.compute.manager [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.538 238887 DEBUG nova.network.neutron [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.566 238887 INFO nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.583 238887 DEBUG nova.compute.manager [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.691 238887 DEBUG nova.compute.manager [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.693 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.693 238887 INFO nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Creating image(s)#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.722 238887 DEBUG nova.storage.rbd_utils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] rbd image 52749176-480b-4b66-b02a-2d5041414572_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.751 238887 DEBUG nova.storage.rbd_utils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] rbd image 52749176-480b-4b66-b02a-2d5041414572_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.780 238887 DEBUG nova.storage.rbd_utils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] rbd image 52749176-480b-4b66-b02a-2d5041414572_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.788 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.841 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.842 238887 DEBUG oslo_concurrency.lockutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.843 238887 DEBUG oslo_concurrency.lockutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.844 238887 DEBUG oslo_concurrency.lockutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.866 238887 DEBUG nova.storage.rbd_utils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] rbd image 52749176-480b-4b66-b02a-2d5041414572_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.870 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 52749176-480b-4b66-b02a-2d5041414572_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.922 238887 DEBUG nova.network.neutron [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb  2 06:56:12 np0005604943 nova_compute[238883]: 2026-02-02 11:56:12.923 238887 DEBUG nova.compute.manager [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 06:56:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:56:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1684812530' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:56:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:56:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1684812530' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.094 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 52749176-480b-4b66-b02a-2d5041414572_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.154 238887 DEBUG nova.storage.rbd_utils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] resizing rbd image 52749176-480b-4b66-b02a-2d5041414572_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 06:56:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:56:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/334978000' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:56:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:56:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/334978000' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.245 238887 DEBUG nova.objects.instance [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lazy-loading 'migration_context' on Instance uuid 52749176-480b-4b66-b02a-2d5041414572 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.264 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.264 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Ensure instance console log exists: /var/lib/nova/instances/52749176-480b-4b66-b02a-2d5041414572/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.265 238887 DEBUG oslo_concurrency.lockutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.266 238887 DEBUG oslo_concurrency.lockutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.266 238887 DEBUG oslo_concurrency.lockutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.268 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.274 238887 WARNING nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.278 238887 DEBUG nova.virt.libvirt.host [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.278 238887 DEBUG nova.virt.libvirt.host [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.280 238887 DEBUG nova.virt.libvirt.host [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.280 238887 DEBUG nova.virt.libvirt.host [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.281 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.281 238887 DEBUG nova.virt.hardware [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.281 238887 DEBUG nova.virt.hardware [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.281 238887 DEBUG nova.virt.hardware [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.281 238887 DEBUG nova.virt.hardware [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.282 238887 DEBUG nova.virt.hardware [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.282 238887 DEBUG nova.virt.hardware [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.282 238887 DEBUG nova.virt.hardware [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.282 238887 DEBUG nova.virt.hardware [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.282 238887 DEBUG nova.virt.hardware [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.282 238887 DEBUG nova.virt.hardware [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.283 238887 DEBUG nova.virt.hardware [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.285 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.658 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:56:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2013473764' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.845 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.871 238887 DEBUG nova.storage.rbd_utils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] rbd image 52749176-480b-4b66-b02a-2d5041414572_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:13 np0005604943 nova_compute[238883]: 2026-02-02 11:56:13.875 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 51 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 456 KiB/s wr, 89 op/s
Feb  2 06:56:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:56:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626218510' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:56:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:56:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/75724592' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:56:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:56:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/75724592' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:14 np0005604943 nova_compute[238883]: 2026-02-02 11:56:14.439 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:14 np0005604943 nova_compute[238883]: 2026-02-02 11:56:14.441 238887 DEBUG nova.objects.instance [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52749176-480b-4b66-b02a-2d5041414572 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:56:14 np0005604943 nova_compute[238883]: 2026-02-02 11:56:14.457 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] End _get_guest_xml xml=<domain type="kvm">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  <uuid>52749176-480b-4b66-b02a-2d5041414572</uuid>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  <name>instance-00000002</name>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <nova:name>tempest-VolumesNegativeTest-instance-1286308573</nova:name>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 11:56:13</nova:creationTime>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:        <nova:user uuid="5bf4d002c327409f9dabb3daeed85748">tempest-VolumesNegativeTest-1546424742-project-member</nova:user>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:        <nova:project uuid="9d8f7eef7fe44d68b33a8aab1d201cd1">tempest-VolumesNegativeTest-1546424742</nova:project>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <nova:ports/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <system>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <entry name="serial">52749176-480b-4b66-b02a-2d5041414572</entry>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <entry name="uuid">52749176-480b-4b66-b02a-2d5041414572</entry>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    </system>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  <os>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  </clock>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/52749176-480b-4b66-b02a-2d5041414572_disk">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/52749176-480b-4b66-b02a-2d5041414572_disk.config">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/52749176-480b-4b66-b02a-2d5041414572/console.log" append="off"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    </serial>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <video>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 06:56:14 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 06:56:14 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:56:14 np0005604943 nova_compute[238883]: </domain>
Feb  2 06:56:14 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 06:56:14 np0005604943 nova_compute[238883]: 2026-02-02 11:56:14.559 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:56:14 np0005604943 nova_compute[238883]: 2026-02-02 11:56:14.560 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:56:14 np0005604943 nova_compute[238883]: 2026-02-02 11:56:14.561 238887 INFO nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Using config drive#033[00m
Feb  2 06:56:14 np0005604943 nova_compute[238883]: 2026-02-02 11:56:14.585 238887 DEBUG nova.storage.rbd_utils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] rbd image 52749176-480b-4b66-b02a-2d5041414572_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:14 np0005604943 nova_compute[238883]: 2026-02-02 11:56:14.946 238887 INFO nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Creating config drive at /var/lib/nova/instances/52749176-480b-4b66-b02a-2d5041414572/disk.config#033[00m
Feb  2 06:56:14 np0005604943 nova_compute[238883]: 2026-02-02 11:56:14.951 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52749176-480b-4b66-b02a-2d5041414572/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpso5ywwhx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:15 np0005604943 nova_compute[238883]: 2026-02-02 11:56:15.077 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52749176-480b-4b66-b02a-2d5041414572/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpso5ywwhx" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:15 np0005604943 nova_compute[238883]: 2026-02-02 11:56:15.107 238887 DEBUG nova.storage.rbd_utils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] rbd image 52749176-480b-4b66-b02a-2d5041414572_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:15 np0005604943 nova_compute[238883]: 2026-02-02 11:56:15.111 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52749176-480b-4b66-b02a-2d5041414572/disk.config 52749176-480b-4b66-b02a-2d5041414572_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:15 np0005604943 nova_compute[238883]: 2026-02-02 11:56:15.229 238887 DEBUG oslo_concurrency.processutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52749176-480b-4b66-b02a-2d5041414572/disk.config 52749176-480b-4b66-b02a-2d5041414572_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:15 np0005604943 nova_compute[238883]: 2026-02-02 11:56:15.230 238887 INFO nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Deleting local config drive /var/lib/nova/instances/52749176-480b-4b66-b02a-2d5041414572/disk.config because it was imported into RBD.#033[00m
Feb  2 06:56:15 np0005604943 systemd-machined[206973]: New machine qemu-2-instance-00000002.
Feb  2 06:56:15 np0005604943 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Feb  2 06:56:15 np0005604943 nova_compute[238883]: 2026-02-02 11:56:15.821 238887 DEBUG nova.compute.manager [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 06:56:15 np0005604943 nova_compute[238883]: 2026-02-02 11:56:15.822 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 06:56:15 np0005604943 nova_compute[238883]: 2026-02-02 11:56:15.822 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033375.8207796, 52749176-480b-4b66-b02a-2d5041414572 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:56:15 np0005604943 nova_compute[238883]: 2026-02-02 11:56:15.822 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 52749176-480b-4b66-b02a-2d5041414572] VM Resumed (Lifecycle Event)#033[00m
Feb  2 06:56:15 np0005604943 nova_compute[238883]: 2026-02-02 11:56:15.827 238887 INFO nova.virt.libvirt.driver [-] [instance: 52749176-480b-4b66-b02a-2d5041414572] Instance spawned successfully.#033[00m
Feb  2 06:56:15 np0005604943 nova_compute[238883]: 2026-02-02 11:56:15.827 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 06:56:15 np0005604943 nova_compute[238883]: 2026-02-02 11:56:15.902 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.026 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 52749176-480b-4b66-b02a-2d5041414572] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.032 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.033 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.033 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.034 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.034 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.035 238887 DEBUG nova.virt.libvirt.driver [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.039 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 52749176-480b-4b66-b02a-2d5041414572] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.110 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 52749176-480b-4b66-b02a-2d5041414572] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.110 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033375.8216665, 52749176-480b-4b66-b02a-2d5041414572 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.111 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 52749176-480b-4b66-b02a-2d5041414572] VM Started (Lifecycle Event)#033[00m
Feb  2 06:56:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 51 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 445 KiB/s wr, 69 op/s
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.252 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 52749176-480b-4b66-b02a-2d5041414572] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.257 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 52749176-480b-4b66-b02a-2d5041414572] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.263 238887 INFO nova.compute.manager [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Took 3.57 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.264 238887 DEBUG nova.compute.manager [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.276 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 52749176-480b-4b66-b02a-2d5041414572] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.324 238887 INFO nova.compute.manager [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Took 4.62 seconds to build instance.#033[00m
Feb  2 06:56:16 np0005604943 nova_compute[238883]: 2026-02-02 11:56:16.340 238887 DEBUG oslo_concurrency.lockutils [None req-0cf7531c-da5c-4842-85a0-1f801fe90533 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "52749176-480b-4b66-b02a-2d5041414572" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:17 np0005604943 nova_compute[238883]: 2026-02-02 11:56:17.363 238887 DEBUG oslo_concurrency.lockutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Acquiring lock "52749176-480b-4b66-b02a-2d5041414572" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:17 np0005604943 nova_compute[238883]: 2026-02-02 11:56:17.363 238887 DEBUG oslo_concurrency.lockutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "52749176-480b-4b66-b02a-2d5041414572" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:17 np0005604943 nova_compute[238883]: 2026-02-02 11:56:17.363 238887 DEBUG oslo_concurrency.lockutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Acquiring lock "52749176-480b-4b66-b02a-2d5041414572-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:17 np0005604943 nova_compute[238883]: 2026-02-02 11:56:17.364 238887 DEBUG oslo_concurrency.lockutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "52749176-480b-4b66-b02a-2d5041414572-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:17 np0005604943 nova_compute[238883]: 2026-02-02 11:56:17.364 238887 DEBUG oslo_concurrency.lockutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "52749176-480b-4b66-b02a-2d5041414572-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:17 np0005604943 nova_compute[238883]: 2026-02-02 11:56:17.365 238887 INFO nova.compute.manager [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Terminating instance#033[00m
Feb  2 06:56:17 np0005604943 nova_compute[238883]: 2026-02-02 11:56:17.366 238887 DEBUG oslo_concurrency.lockutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Acquiring lock "refresh_cache-52749176-480b-4b66-b02a-2d5041414572" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:56:17 np0005604943 nova_compute[238883]: 2026-02-02 11:56:17.366 238887 DEBUG oslo_concurrency.lockutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Acquired lock "refresh_cache-52749176-480b-4b66-b02a-2d5041414572" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:56:17 np0005604943 nova_compute[238883]: 2026-02-02 11:56:17.366 238887 DEBUG nova.network.neutron [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 06:56:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:56:17 np0005604943 nova_compute[238883]: 2026-02-02 11:56:17.835 238887 DEBUG nova.network.neutron [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 06:56:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Feb  2 06:56:18 np0005604943 nova_compute[238883]: 2026-02-02 11:56:18.175 238887 DEBUG nova.network.neutron [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:56:18 np0005604943 nova_compute[238883]: 2026-02-02 11:56:18.198 238887 DEBUG oslo_concurrency.lockutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Releasing lock "refresh_cache-52749176-480b-4b66-b02a-2d5041414572" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:56:18 np0005604943 nova_compute[238883]: 2026-02-02 11:56:18.199 238887 DEBUG nova.compute.manager [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 06:56:18 np0005604943 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Feb  2 06:56:18 np0005604943 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 3.037s CPU time.
Feb  2 06:56:18 np0005604943 systemd-machined[206973]: Machine qemu-2-instance-00000002 terminated.
Feb  2 06:56:18 np0005604943 nova_compute[238883]: 2026-02-02 11:56:18.464 238887 INFO nova.virt.libvirt.driver [-] [instance: 52749176-480b-4b66-b02a-2d5041414572] Instance destroyed successfully.#033[00m
Feb  2 06:56:18 np0005604943 nova_compute[238883]: 2026-02-02 11:56:18.465 238887 DEBUG nova.objects.instance [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lazy-loading 'resources' on Instance uuid 52749176-480b-4b66-b02a-2d5041414572 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:56:18 np0005604943 nova_compute[238883]: 2026-02-02 11:56:18.661 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:18 np0005604943 nova_compute[238883]: 2026-02-02 11:56:18.706 238887 INFO nova.virt.libvirt.driver [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Deleting instance files /var/lib/nova/instances/52749176-480b-4b66-b02a-2d5041414572_del#033[00m
Feb  2 06:56:18 np0005604943 nova_compute[238883]: 2026-02-02 11:56:18.706 238887 INFO nova.virt.libvirt.driver [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Deletion of /var/lib/nova/instances/52749176-480b-4b66-b02a-2d5041414572_del complete#033[00m
Feb  2 06:56:18 np0005604943 nova_compute[238883]: 2026-02-02 11:56:18.761 238887 INFO nova.compute.manager [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] [instance: 52749176-480b-4b66-b02a-2d5041414572] Took 0.56 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 06:56:18 np0005604943 nova_compute[238883]: 2026-02-02 11:56:18.762 238887 DEBUG oslo.service.loopingcall [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 06:56:18 np0005604943 nova_compute[238883]: 2026-02-02 11:56:18.762 238887 DEBUG nova.compute.manager [-] [instance: 52749176-480b-4b66-b02a-2d5041414572] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 06:56:18 np0005604943 nova_compute[238883]: 2026-02-02 11:56:18.763 238887 DEBUG nova.network.neutron [-] [instance: 52749176-480b-4b66-b02a-2d5041414572] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 06:56:19 np0005604943 nova_compute[238883]: 2026-02-02 11:56:19.358 238887 DEBUG nova.network.neutron [-] [instance: 52749176-480b-4b66-b02a-2d5041414572] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 06:56:19 np0005604943 nova_compute[238883]: 2026-02-02 11:56:19.375 238887 DEBUG nova.network.neutron [-] [instance: 52749176-480b-4b66-b02a-2d5041414572] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:56:19 np0005604943 nova_compute[238883]: 2026-02-02 11:56:19.390 238887 INFO nova.compute.manager [-] [instance: 52749176-480b-4b66-b02a-2d5041414572] Took 0.63 seconds to deallocate network for instance.#033[00m
Feb  2 06:56:19 np0005604943 nova_compute[238883]: 2026-02-02 11:56:19.441 238887 DEBUG oslo_concurrency.lockutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:19 np0005604943 nova_compute[238883]: 2026-02-02 11:56:19.441 238887 DEBUG oslo_concurrency.lockutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:19 np0005604943 nova_compute[238883]: 2026-02-02 11:56:19.492 238887 DEBUG oslo_concurrency.processutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:56:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1288302182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:56:20 np0005604943 nova_compute[238883]: 2026-02-02 11:56:20.010 238887 DEBUG oslo_concurrency.processutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:20 np0005604943 nova_compute[238883]: 2026-02-02 11:56:20.016 238887 DEBUG nova.compute.provider_tree [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:56:20 np0005604943 nova_compute[238883]: 2026-02-02 11:56:20.032 238887 DEBUG nova.scheduler.client.report [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:56:20 np0005604943 nova_compute[238883]: 2026-02-02 11:56:20.054 238887 DEBUG oslo_concurrency.lockutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:20 np0005604943 nova_compute[238883]: 2026-02-02 11:56:20.082 238887 INFO nova.scheduler.client.report [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Deleted allocations for instance 52749176-480b-4b66-b02a-2d5041414572#033[00m
Feb  2 06:56:20 np0005604943 nova_compute[238883]: 2026-02-02 11:56:20.152 238887 DEBUG oslo_concurrency.lockutils [None req-506d7fce-5f36-42ff-9fce-f00c1fccca0b 5bf4d002c327409f9dabb3daeed85748 9d8f7eef7fe44d68b33a8aab1d201cd1 - - default default] Lock "52749176-480b-4b66-b02a-2d5041414572" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 78 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Feb  2 06:56:20 np0005604943 nova_compute[238883]: 2026-02-02 11:56:20.903 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0002645913407448144 of space, bias 1.0, pg target 0.07937740222344433 quantized to 32 (current 32)
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.81850319035376e-06 of space, bias 1.0, pg target 0.001745550957106128 quantized to 32 (current 32)
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.2674372258763562e-07 of space, bias 1.0, pg target 6.802311677629069e-05 quantized to 32 (current 32)
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659821916885609 of space, bias 1.0, pg target 0.19979465750656827 quantized to 32 (current 32)
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3730842216239821e-06 of space, bias 4.0, pg target 0.0016477010659487787 quantized to 16 (current 16)
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:56:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:56:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Feb  2 06:56:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Feb  2 06:56:21 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Feb  2 06:56:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 78 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 236 op/s
Feb  2 06:56:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:56:23 np0005604943 nova_compute[238883]: 2026-02-02 11:56:23.662 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:56:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.6 MiB/s wr, 189 op/s
Feb  2 06:56:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Feb  2 06:56:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Feb  2 06:56:24 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Feb  2 06:56:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Feb  2 06:56:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Feb  2 06:56:25 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Feb  2 06:56:25 np0005604943 nova_compute[238883]: 2026-02-02 11:56:25.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:56:25 np0005604943 nova_compute[238883]: 2026-02-02 11:56:25.905 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:56:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.3 KiB/s wr, 54 op/s
Feb  2 06:56:26 np0005604943 nova_compute[238883]: 2026-02-02 11:56:26.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:56:26 np0005604943 nova_compute[238883]: 2026-02-02 11:56:26.642 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  2 06:56:26 np0005604943 nova_compute[238883]: 2026-02-02 11:56:26.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:56:26 np0005604943 nova_compute[238883]: 2026-02-02 11:56:26.662 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:56:26 np0005604943 nova_compute[238883]: 2026-02-02 11:56:26.663 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:56:26 np0005604943 nova_compute[238883]: 2026-02-02 11:56:26.663 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:56:26 np0005604943 nova_compute[238883]: 2026-02-02 11:56:26.663 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 06:56:26 np0005604943 nova_compute[238883]: 2026-02-02 11:56:26.664 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:56:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:56:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4221828913' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:56:27 np0005604943 nova_compute[238883]: 2026-02-02 11:56:27.250 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:56:27 np0005604943 nova_compute[238883]: 2026-02-02 11:56:27.416 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 06:56:27 np0005604943 nova_compute[238883]: 2026-02-02 11:56:27.417 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4833MB free_disk=59.98825871013105GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 06:56:27 np0005604943 nova_compute[238883]: 2026-02-02 11:56:27.418 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:56:27 np0005604943 nova_compute[238883]: 2026-02-02 11:56:27.418 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:56:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:56:27 np0005604943 nova_compute[238883]: 2026-02-02 11:56:27.496 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 06:56:27 np0005604943 nova_compute[238883]: 2026-02-02 11:56:27.497 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 06:56:27 np0005604943 nova_compute[238883]: 2026-02-02 11:56:27.524 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:56:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Feb  2 06:56:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Feb  2 06:56:27 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Feb  2 06:56:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:56:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/64904368' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:56:28 np0005604943 nova_compute[238883]: 2026-02-02 11:56:28.101 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:56:28 np0005604943 nova_compute[238883]: 2026-02-02 11:56:28.106 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 06:56:28 np0005604943 nova_compute[238883]: 2026-02-02 11:56:28.125 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 06:56:28 np0005604943 nova_compute[238883]: 2026-02-02 11:56:28.145 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 06:56:28 np0005604943 nova_compute[238883]: 2026-02-02 11:56:28.146 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:56:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 10 KiB/s wr, 147 op/s
Feb  2 06:56:28 np0005604943 nova_compute[238883]: 2026-02-02 11:56:28.663 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:56:29 np0005604943 podman[246748]: 2026-02-02 11:56:29.057852335 +0000 UTC m=+0.061024928 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 06:56:29 np0005604943 podman[246747]: 2026-02-02 11:56:29.098511506 +0000 UTC m=+0.101880095 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:56:29 np0005604943 nova_compute[238883]: 2026-02-02 11:56:29.147 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:56:29 np0005604943 nova_compute[238883]: 2026-02-02 11:56:29.148 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 06:56:29 np0005604943 nova_compute[238883]: 2026-02-02 11:56:29.148 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 06:56:29 np0005604943 nova_compute[238883]: 2026-02-02 11:56:29.173 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 06:56:29 np0005604943 nova_compute[238883]: 2026-02-02 11:56:29.173 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:56:29 np0005604943 nova_compute[238883]: 2026-02-02 11:56:29.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:56:29 np0005604943 nova_compute[238883]: 2026-02-02 11:56:29.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:56:29 np0005604943 nova_compute[238883]: 2026-02-02 11:56:29.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:56:29 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:29Z|00035|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb  2 06:56:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 8.0 KiB/s wr, 104 op/s
Feb  2 06:56:30 np0005604943 nova_compute[238883]: 2026-02-02 11:56:30.905 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:56:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:56:30 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1796647948' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:56:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:56:30 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1796647948' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.050 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "2611c633-f397-48e0-a70b-bc81c48cbb65" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.050 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.067 238887 DEBUG nova.compute.manager [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.138 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.139 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.146 238887 DEBUG nova.virt.hardware [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.146 238887 INFO nova.compute.claims [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Claim successful on node compute-0.ctlplane.example.com
Feb  2 06:56:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:56:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2902158191' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:56:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:56:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2902158191' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.254 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.636 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:56:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:56:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1384077360' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.769 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.773 238887 DEBUG nova.compute.provider_tree [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.790 238887 DEBUG nova.scheduler.client.report [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.933 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.933 238887 DEBUG nova.compute.manager [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.995 238887 DEBUG nova.compute.manager [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 06:56:31 np0005604943 nova_compute[238883]: 2026-02-02 11:56:31.996 238887 DEBUG nova.network.neutron [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.014 238887 INFO nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.033 238887 DEBUG nova.compute.manager [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.118 238887 DEBUG nova.compute.manager [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.120 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.120 238887 INFO nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Creating image(s)#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.140 238887 DEBUG nova.storage.rbd_utils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 2611c633-f397-48e0-a70b-bc81c48cbb65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.163 238887 DEBUG nova.storage.rbd_utils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 2611c633-f397-48e0-a70b-bc81c48cbb65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 6.4 KiB/s wr, 82 op/s
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.184 238887 DEBUG nova.storage.rbd_utils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 2611c633-f397-48e0-a70b-bc81c48cbb65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.189 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.234 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.234 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.235 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.235 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.253 238887 DEBUG nova.storage.rbd_utils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 2611c633-f397-48e0-a70b-bc81c48cbb65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.256 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 2611c633-f397-48e0-a70b-bc81c48cbb65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.273 238887 DEBUG nova.policy [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37a4dd38356f4cbd937094eb4da6f5cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e7640959e7c4de1a4850ecd1b55f37c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 06:56:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:56:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Feb  2 06:56:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Feb  2 06:56:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.450 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 2611c633-f397-48e0-a70b-bc81c48cbb65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.502 238887 DEBUG nova.storage.rbd_utils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] resizing rbd image 2611c633-f397-48e0-a70b-bc81c48cbb65_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.569 238887 DEBUG nova.objects.instance [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lazy-loading 'migration_context' on Instance uuid 2611c633-f397-48e0-a70b-bc81c48cbb65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.582 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.583 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Ensure instance console log exists: /var/lib/nova/instances/2611c633-f397-48e0-a70b-bc81c48cbb65/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.584 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.585 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:32 np0005604943 nova_compute[238883]: 2026-02-02 11:56:32.585 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:33 np0005604943 nova_compute[238883]: 2026-02-02 11:56:33.245 238887 DEBUG nova.network.neutron [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Successfully created port: d01febf2-ab0a-47cd-8e66-e0336c3333e5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 06:56:33 np0005604943 nova_compute[238883]: 2026-02-02 11:56:33.413 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033378.4123726, 52749176-480b-4b66-b02a-2d5041414572 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:56:33 np0005604943 nova_compute[238883]: 2026-02-02 11:56:33.414 238887 INFO nova.compute.manager [-] [instance: 52749176-480b-4b66-b02a-2d5041414572] VM Stopped (Lifecycle Event)#033[00m
Feb  2 06:56:33 np0005604943 nova_compute[238883]: 2026-02-02 11:56:33.446 238887 DEBUG nova.compute.manager [None req-22185e47-b4d0-41fa-85df-b7cf3ebf78a1 - - - - - -] [instance: 52749176-480b-4b66-b02a-2d5041414572] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:56:33 np0005604943 nova_compute[238883]: 2026-02-02 11:56:33.665 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:34 np0005604943 nova_compute[238883]: 2026-02-02 11:56:34.068 238887 DEBUG nova.network.neutron [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Successfully updated port: d01febf2-ab0a-47cd-8e66-e0336c3333e5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 06:56:34 np0005604943 nova_compute[238883]: 2026-02-02 11:56:34.085 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "refresh_cache-2611c633-f397-48e0-a70b-bc81c48cbb65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:56:34 np0005604943 nova_compute[238883]: 2026-02-02 11:56:34.085 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquired lock "refresh_cache-2611c633-f397-48e0-a70b-bc81c48cbb65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:56:34 np0005604943 nova_compute[238883]: 2026-02-02 11:56:34.085 238887 DEBUG nova.network.neutron [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 06:56:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 82 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.8 MiB/s wr, 209 op/s
Feb  2 06:56:34 np0005604943 nova_compute[238883]: 2026-02-02 11:56:34.478 238887 DEBUG nova.compute.manager [req-bb0e2072-0fef-4f84-9cca-dea62398ad3e req-e47b3691-196a-4828-8f8d-5d9b805fff86 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Received event network-changed-d01febf2-ab0a-47cd-8e66-e0336c3333e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:56:34 np0005604943 nova_compute[238883]: 2026-02-02 11:56:34.479 238887 DEBUG nova.compute.manager [req-bb0e2072-0fef-4f84-9cca-dea62398ad3e req-e47b3691-196a-4828-8f8d-5d9b805fff86 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Refreshing instance network info cache due to event network-changed-d01febf2-ab0a-47cd-8e66-e0336c3333e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:56:34 np0005604943 nova_compute[238883]: 2026-02-02 11:56:34.479 238887 DEBUG oslo_concurrency.lockutils [req-bb0e2072-0fef-4f84-9cca-dea62398ad3e req-e47b3691-196a-4828-8f8d-5d9b805fff86 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-2611c633-f397-48e0-a70b-bc81c48cbb65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:56:34 np0005604943 nova_compute[238883]: 2026-02-02 11:56:34.512 238887 DEBUG nova.network.neutron [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.392 238887 DEBUG nova.network.neutron [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Updating instance_info_cache with network_info: [{"id": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "address": "fa:16:3e:f3:ec:cc", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01febf2-ab", "ovs_interfaceid": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.413 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Releasing lock "refresh_cache-2611c633-f397-48e0-a70b-bc81c48cbb65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.414 238887 DEBUG nova.compute.manager [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Instance network_info: |[{"id": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "address": "fa:16:3e:f3:ec:cc", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01febf2-ab", "ovs_interfaceid": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.414 238887 DEBUG oslo_concurrency.lockutils [req-bb0e2072-0fef-4f84-9cca-dea62398ad3e req-e47b3691-196a-4828-8f8d-5d9b805fff86 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-2611c633-f397-48e0-a70b-bc81c48cbb65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.414 238887 DEBUG nova.network.neutron [req-bb0e2072-0fef-4f84-9cca-dea62398ad3e req-e47b3691-196a-4828-8f8d-5d9b805fff86 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Refreshing network info cache for port d01febf2-ab0a-47cd-8e66-e0336c3333e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.417 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Start _get_guest_xml network_info=[{"id": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "address": "fa:16:3e:f3:ec:cc", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01febf2-ab", "ovs_interfaceid": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.420 238887 WARNING nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.425 238887 DEBUG nova.virt.libvirt.host [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.426 238887 DEBUG nova.virt.libvirt.host [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.428 238887 DEBUG nova.virt.libvirt.host [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.428 238887 DEBUG nova.virt.libvirt.host [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.429 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.429 238887 DEBUG nova.virt.hardware [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.429 238887 DEBUG nova.virt.hardware [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.430 238887 DEBUG nova.virt.hardware [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.430 238887 DEBUG nova.virt.hardware [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.430 238887 DEBUG nova.virt.hardware [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.430 238887 DEBUG nova.virt.hardware [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.431 238887 DEBUG nova.virt.hardware [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.431 238887 DEBUG nova.virt.hardware [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.431 238887 DEBUG nova.virt.hardware [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.431 238887 DEBUG nova.virt.hardware [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.431 238887 DEBUG nova.virt.hardware [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.434 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.907 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:56:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1465685106' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.974 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.992 238887 DEBUG nova.storage.rbd_utils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 2611c633-f397-48e0-a70b-bc81c48cbb65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:35 np0005604943 nova_compute[238883]: 2026-02-02 11:56:35.995 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 82 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 4.5 MiB/s wr, 132 op/s
Feb  2 06:56:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:56:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3682833500' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.568 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.569 238887 DEBUG nova.virt.libvirt.vif [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1480210165',display_name='tempest-VolumesActionsTest-instance-1480210165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1480210165',id=3,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e7640959e7c4de1a4850ecd1b55f37c',ramdisk_id='',reservation_id='r-j08c62x0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-401080261',owner_user_name='tempest-VolumesActionsTest-401080261-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:56:32Z,user_data=None,user_id='37a4dd38356f4cbd937094eb4da6f5cb',uuid=2611c633-f397-48e0-a70b-bc81c48cbb65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "address": "fa:16:3e:f3:ec:cc", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01febf2-ab", "ovs_interfaceid": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.570 238887 DEBUG nova.network.os_vif_util [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Converting VIF {"id": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "address": "fa:16:3e:f3:ec:cc", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01febf2-ab", "ovs_interfaceid": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.571 238887 DEBUG nova.network.os_vif_util [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:ec:cc,bridge_name='br-int',has_traffic_filtering=True,id=d01febf2-ab0a-47cd-8e66-e0336c3333e5,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01febf2-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.572 238887 DEBUG nova.objects.instance [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lazy-loading 'pci_devices' on Instance uuid 2611c633-f397-48e0-a70b-bc81c48cbb65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.596 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] End _get_guest_xml xml=<domain type="kvm">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  <uuid>2611c633-f397-48e0-a70b-bc81c48cbb65</uuid>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  <name>instance-00000003</name>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <nova:name>tempest-VolumesActionsTest-instance-1480210165</nova:name>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 11:56:35</nova:creationTime>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:        <nova:user uuid="37a4dd38356f4cbd937094eb4da6f5cb">tempest-VolumesActionsTest-401080261-project-member</nova:user>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:        <nova:project uuid="4e7640959e7c4de1a4850ecd1b55f37c">tempest-VolumesActionsTest-401080261</nova:project>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:        <nova:port uuid="d01febf2-ab0a-47cd-8e66-e0336c3333e5">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <system>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <entry name="serial">2611c633-f397-48e0-a70b-bc81c48cbb65</entry>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <entry name="uuid">2611c633-f397-48e0-a70b-bc81c48cbb65</entry>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    </system>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  <os>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  </clock>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/2611c633-f397-48e0-a70b-bc81c48cbb65_disk">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/2611c633-f397-48e0-a70b-bc81c48cbb65_disk.config">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:f3:ec:cc"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <target dev="tapd01febf2-ab"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/2611c633-f397-48e0-a70b-bc81c48cbb65/console.log" append="off"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    </serial>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <video>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 06:56:36 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 06:56:36 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:56:36 np0005604943 nova_compute[238883]: </domain>
Feb  2 06:56:36 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.597 238887 DEBUG nova.compute.manager [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Preparing to wait for external event network-vif-plugged-d01febf2-ab0a-47cd-8e66-e0336c3333e5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.597 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.598 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.598 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.599 238887 DEBUG nova.virt.libvirt.vif [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1480210165',display_name='tempest-VolumesActionsTest-instance-1480210165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1480210165',id=3,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e7640959e7c4de1a4850ecd1b55f37c',ramdisk_id='',reservation_id='r-j08c62x0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-401080261',owner_user_name='tempest-VolumesActionsTest-401080261-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:56:32Z,user_data=None,user_id='37a4dd38356f4cbd937094eb4da6f5cb',uuid=2611c633-f397-48e0-a70b-bc81c48cbb65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "address": "fa:16:3e:f3:ec:cc", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01febf2-ab", "ovs_interfaceid": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.599 238887 DEBUG nova.network.os_vif_util [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Converting VIF {"id": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "address": "fa:16:3e:f3:ec:cc", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01febf2-ab", "ovs_interfaceid": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.599 238887 DEBUG nova.network.os_vif_util [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:ec:cc,bridge_name='br-int',has_traffic_filtering=True,id=d01febf2-ab0a-47cd-8e66-e0336c3333e5,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01febf2-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.600 238887 DEBUG os_vif [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:ec:cc,bridge_name='br-int',has_traffic_filtering=True,id=d01febf2-ab0a-47cd-8e66-e0336c3333e5,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01febf2-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.600 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.601 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.601 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.604 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.604 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd01febf2-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.605 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd01febf2-ab, col_values=(('external_ids', {'iface-id': 'd01febf2-ab0a-47cd-8e66-e0336c3333e5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:ec:cc', 'vm-uuid': '2611c633-f397-48e0-a70b-bc81c48cbb65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.606 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:36 np0005604943 NetworkManager[49093]: <info>  [1770033396.6070] manager: (tapd01febf2-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.608 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.612 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.612 238887 INFO os_vif [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:ec:cc,bridge_name='br-int',has_traffic_filtering=True,id=d01febf2-ab0a-47cd-8e66-e0336c3333e5,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01febf2-ab')#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.620 238887 DEBUG nova.network.neutron [req-bb0e2072-0fef-4f84-9cca-dea62398ad3e req-e47b3691-196a-4828-8f8d-5d9b805fff86 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Updated VIF entry in instance network info cache for port d01febf2-ab0a-47cd-8e66-e0336c3333e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.621 238887 DEBUG nova.network.neutron [req-bb0e2072-0fef-4f84-9cca-dea62398ad3e req-e47b3691-196a-4828-8f8d-5d9b805fff86 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Updating instance_info_cache with network_info: [{"id": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "address": "fa:16:3e:f3:ec:cc", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01febf2-ab", "ovs_interfaceid": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.657 238887 DEBUG oslo_concurrency.lockutils [req-bb0e2072-0fef-4f84-9cca-dea62398ad3e req-e47b3691-196a-4828-8f8d-5d9b805fff86 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-2611c633-f397-48e0-a70b-bc81c48cbb65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.674 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.674 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.674 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] No VIF found with MAC fa:16:3e:f3:ec:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.675 238887 INFO nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Using config drive#033[00m
Feb  2 06:56:36 np0005604943 nova_compute[238883]: 2026-02-02 11:56:36.692 238887 DEBUG nova.storage.rbd_utils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 2611c633-f397-48e0-a70b-bc81c48cbb65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:56:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Feb  2 06:56:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Feb  2 06:56:37 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Feb  2 06:56:37 np0005604943 nova_compute[238883]: 2026-02-02 11:56:37.735 238887 INFO nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Creating config drive at /var/lib/nova/instances/2611c633-f397-48e0-a70b-bc81c48cbb65/disk.config#033[00m
Feb  2 06:56:37 np0005604943 nova_compute[238883]: 2026-02-02 11:56:37.739 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2611c633-f397-48e0-a70b-bc81c48cbb65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpeikogm9z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:37 np0005604943 nova_compute[238883]: 2026-02-02 11:56:37.861 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2611c633-f397-48e0-a70b-bc81c48cbb65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpeikogm9z" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:37 np0005604943 nova_compute[238883]: 2026-02-02 11:56:37.890 238887 DEBUG nova.storage.rbd_utils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 2611c633-f397-48e0-a70b-bc81c48cbb65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:37 np0005604943 nova_compute[238883]: 2026-02-02 11:56:37.893 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2611c633-f397-48e0-a70b-bc81c48cbb65/disk.config 2611c633-f397-48e0-a70b-bc81c48cbb65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:56:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/650480307' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:56:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:56:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/650480307' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.013 238887 DEBUG oslo_concurrency.processutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2611c633-f397-48e0-a70b-bc81c48cbb65/disk.config 2611c633-f397-48e0-a70b-bc81c48cbb65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.014 238887 INFO nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Deleting local config drive /var/lib/nova/instances/2611c633-f397-48e0-a70b-bc81c48cbb65/disk.config because it was imported into RBD.#033[00m
Feb  2 06:56:38 np0005604943 kernel: tapd01febf2-ab: entered promiscuous mode
Feb  2 06:56:38 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:38Z|00036|binding|INFO|Claiming lport d01febf2-ab0a-47cd-8e66-e0336c3333e5 for this chassis.
Feb  2 06:56:38 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:38Z|00037|binding|INFO|d01febf2-ab0a-47cd-8e66-e0336c3333e5: Claiming fa:16:3e:f3:ec:cc 10.100.0.8
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.054 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:38 np0005604943 NetworkManager[49093]: <info>  [1770033398.0565] manager: (tapd01febf2-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.058 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.062 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.066 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.078 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:ec:cc 10.100.0.8'], port_security=['fa:16:3e:f3:ec:cc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2611c633-f397-48e0-a70b-bc81c48cbb65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efd07eae-76b7-411a-9564-96e7e46d25ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e7640959e7c4de1a4850ecd1b55f37c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '713394b4-1bd6-46bb-a85c-8ab4d32885b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51ac8d6c-86ce-45d9-a8cf-5ff78d5d5bc9, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=d01febf2-ab0a-47cd-8e66-e0336c3333e5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.079 155011 INFO neutron.agent.ovn.metadata.agent [-] Port d01febf2-ab0a-47cd-8e66-e0336c3333e5 in datapath efd07eae-76b7-411a-9564-96e7e46d25ba bound to our chassis#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.080 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network efd07eae-76b7-411a-9564-96e7e46d25ba#033[00m
Feb  2 06:56:38 np0005604943 systemd-machined[206973]: New machine qemu-3-instance-00000003.
Feb  2 06:56:38 np0005604943 systemd-udevd[247115]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.089 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[44c881f3-b48c-4c4e-acae-5003242633f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.089 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapefd07eae-71 in ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.090 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapefd07eae-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.091 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9e574893-6386-436f-a2f1-67c23246328d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.092 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[0e116b67-31b2-4682-8252-f54220e7fddd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 NetworkManager[49093]: <info>  [1770033398.0966] device (tapd01febf2-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 06:56:38 np0005604943 NetworkManager[49093]: <info>  [1770033398.0975] device (tapd01febf2-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 06:56:38 np0005604943 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Feb  2 06:56:38 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:38Z|00038|binding|INFO|Setting lport d01febf2-ab0a-47cd-8e66-e0336c3333e5 ovn-installed in OVS
Feb  2 06:56:38 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:38Z|00039|binding|INFO|Setting lport d01febf2-ab0a-47cd-8e66-e0336c3333e5 up in Southbound
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.103 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.105 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[d26ab520-777b-40e8-8966-e8bba86cddd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.120 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fd853f53-3e77-4927-bb16-b197c38ae4fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.145 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b652b7-5570-49de-9f7a-053601c2d0c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.151 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[915caa44-17ce-478f-ace5-8c913cef6470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 NetworkManager[49093]: <info>  [1770033398.1526] manager: (tapefd07eae-70): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Feb  2 06:56:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 5.3 MiB/s wr, 153 op/s
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.179 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[70e18c56-8a89-4809-829f-2ab350411b71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.182 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[a96324be-a1f9-4b11-a8c9-b1967b8ff484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 NetworkManager[49093]: <info>  [1770033398.2047] device (tapefd07eae-70): carrier: link connected
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.209 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[4f01c835-340e-41ca-a8c6-589613725037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.222 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[54db0d55-ba35-4640-bf06-81935b5404fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefd07eae-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:73:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384355, 'reachable_time': 41687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247149, 'error': None, 'target': 'ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.233 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[85623dbd-df04-46e3-a0ce-61748a854611]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:73f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384355, 'tstamp': 384355}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247150, 'error': None, 'target': 'ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.246 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9abdc8b8-a96e-4830-a6e6-867446520d6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefd07eae-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:73:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384355, 'reachable_time': 41687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247151, 'error': None, 'target': 'ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.271 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d313e21d-2974-42b6-801f-c299bfd2720e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.315 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[18e9b802-618f-4ccd-8c2d-9b7ae465c249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.316 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefd07eae-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.316 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.317 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefd07eae-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:38 np0005604943 kernel: tapefd07eae-70: entered promiscuous mode
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.319 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:38 np0005604943 NetworkManager[49093]: <info>  [1770033398.3209] manager: (tapefd07eae-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.321 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapefd07eae-70, col_values=(('external_ids', {'iface-id': '7a7f9bbc-6c88-4c95-b635-acbc93d76395'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:38 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:38Z|00040|binding|INFO|Releasing lport 7a7f9bbc-6c88-4c95-b635-acbc93d76395 from this chassis (sb_readonly=0)
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.324 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/efd07eae-76b7-411a-9564-96e7e46d25ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/efd07eae-76b7-411a-9564-96e7e46d25ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.325 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d704de75-0df7-4267-a69b-a9dfc531a03e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.325 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-efd07eae-76b7-411a-9564-96e7e46d25ba
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/efd07eae-76b7-411a-9564-96e7e46d25ba.pid.haproxy
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID efd07eae-76b7-411a-9564-96e7e46d25ba
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 06:56:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:38.326 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba', 'env', 'PROCESS_TAG=haproxy-efd07eae-76b7-411a-9564-96e7e46d25ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/efd07eae-76b7-411a-9564-96e7e46d25ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.330 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.412 238887 DEBUG nova.compute.manager [req-f2d67ebf-ff05-425e-a467-91f5c5009308 req-0378651a-3bb8-45c3-8599-ef8c84f0b205 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Received event network-vif-plugged-d01febf2-ab0a-47cd-8e66-e0336c3333e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.412 238887 DEBUG oslo_concurrency.lockutils [req-f2d67ebf-ff05-425e-a467-91f5c5009308 req-0378651a-3bb8-45c3-8599-ef8c84f0b205 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.412 238887 DEBUG oslo_concurrency.lockutils [req-f2d67ebf-ff05-425e-a467-91f5c5009308 req-0378651a-3bb8-45c3-8599-ef8c84f0b205 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.412 238887 DEBUG oslo_concurrency.lockutils [req-f2d67ebf-ff05-425e-a467-91f5c5009308 req-0378651a-3bb8-45c3-8599-ef8c84f0b205 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:38 np0005604943 nova_compute[238883]: 2026-02-02 11:56:38.413 238887 DEBUG nova.compute.manager [req-f2d67ebf-ff05-425e-a467-91f5c5009308 req-0378651a-3bb8-45c3-8599-ef8c84f0b205 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Processing event network-vif-plugged-d01febf2-ab0a-47cd-8e66-e0336c3333e5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 06:56:38 np0005604943 podman[247183]: 2026-02-02 11:56:38.68368441 +0000 UTC m=+0.046831791 container create beb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb  2 06:56:38 np0005604943 systemd[1]: Started libpod-conmon-beb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7.scope.
Feb  2 06:56:38 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:56:38 np0005604943 podman[247183]: 2026-02-02 11:56:38.65805867 +0000 UTC m=+0.021206051 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 06:56:38 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5d2d713ddabcd22ddb5287cacaf02579bb173cdccc2517ac0cf08e83a49b68/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:38 np0005604943 podman[247183]: 2026-02-02 11:56:38.764331524 +0000 UTC m=+0.127478915 container init beb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb  2 06:56:38 np0005604943 podman[247183]: 2026-02-02 11:56:38.769956368 +0000 UTC m=+0.133103749 container start beb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Feb  2 06:56:38 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[247199]: [NOTICE]   (247203) : New worker (247205) forked
Feb  2 06:56:38 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[247199]: [NOTICE]   (247203) : Loading success.
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.000 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033398.9998388, 2611c633-f397-48e0-a70b-bc81c48cbb65 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.001 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] VM Started (Lifecycle Event)#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.003 238887 DEBUG nova.compute.manager [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.008 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.012 238887 INFO nova.virt.libvirt.driver [-] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Instance spawned successfully.#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.013 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.034 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.039 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.040 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.040 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.040 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.041 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.041 238887 DEBUG nova.virt.libvirt.driver [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.045 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.080 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.080 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033399.0010135, 2611c633-f397-48e0-a70b-bc81c48cbb65 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.081 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] VM Paused (Lifecycle Event)#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.108 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.110 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033399.0073142, 2611c633-f397-48e0-a70b-bc81c48cbb65 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.110 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] VM Resumed (Lifecycle Event)#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.118 238887 INFO nova.compute.manager [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Took 7.00 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.119 238887 DEBUG nova.compute.manager [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.130 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.133 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.157 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.190 238887 INFO nova.compute.manager [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Took 8.07 seconds to build instance.#033[00m
Feb  2 06:56:39 np0005604943 nova_compute[238883]: 2026-02-02 11:56:39.216 238887 DEBUG oslo_concurrency.lockutils [None req-bfe58f53-e599-4ee6-ae07-4b8eef20ecb2 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 5.3 MiB/s wr, 172 op/s
Feb  2 06:56:40 np0005604943 nova_compute[238883]: 2026-02-02 11:56:40.579 238887 DEBUG nova.compute.manager [req-2e27e170-efc1-4893-9095-5667ca6ec07e req-36b8e487-63ef-4435-9d23-de02e3796240 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Received event network-vif-plugged-d01febf2-ab0a-47cd-8e66-e0336c3333e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:56:40 np0005604943 nova_compute[238883]: 2026-02-02 11:56:40.579 238887 DEBUG oslo_concurrency.lockutils [req-2e27e170-efc1-4893-9095-5667ca6ec07e req-36b8e487-63ef-4435-9d23-de02e3796240 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:40 np0005604943 nova_compute[238883]: 2026-02-02 11:56:40.580 238887 DEBUG oslo_concurrency.lockutils [req-2e27e170-efc1-4893-9095-5667ca6ec07e req-36b8e487-63ef-4435-9d23-de02e3796240 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:40 np0005604943 nova_compute[238883]: 2026-02-02 11:56:40.580 238887 DEBUG oslo_concurrency.lockutils [req-2e27e170-efc1-4893-9095-5667ca6ec07e req-36b8e487-63ef-4435-9d23-de02e3796240 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:40 np0005604943 nova_compute[238883]: 2026-02-02 11:56:40.580 238887 DEBUG nova.compute.manager [req-2e27e170-efc1-4893-9095-5667ca6ec07e req-36b8e487-63ef-4435-9d23-de02e3796240 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] No waiting events found dispatching network-vif-plugged-d01febf2-ab0a-47cd-8e66-e0336c3333e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:56:40 np0005604943 nova_compute[238883]: 2026-02-02 11:56:40.580 238887 WARNING nova.compute.manager [req-2e27e170-efc1-4893-9095-5667ca6ec07e req-36b8e487-63ef-4435-9d23-de02e3796240 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Received unexpected event network-vif-plugged-d01febf2-ab0a-47cd-8e66-e0336c3333e5 for instance with vm_state active and task_state None.#033[00m
Feb  2 06:56:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:56:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:56:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:56:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:56:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:56:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:56:40 np0005604943 nova_compute[238883]: 2026-02-02 11:56:40.949 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:41 np0005604943 nova_compute[238883]: 2026-02-02 11:56:41.606 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 3.7 MiB/s wr, 97 op/s
Feb  2 06:56:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:56:42 np0005604943 nova_compute[238883]: 2026-02-02 11:56:42.967 238887 DEBUG oslo_concurrency.lockutils [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "2611c633-f397-48e0-a70b-bc81c48cbb65" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:42 np0005604943 nova_compute[238883]: 2026-02-02 11:56:42.968 238887 DEBUG oslo_concurrency.lockutils [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:42 np0005604943 nova_compute[238883]: 2026-02-02 11:56:42.969 238887 DEBUG oslo_concurrency.lockutils [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:42 np0005604943 nova_compute[238883]: 2026-02-02 11:56:42.969 238887 DEBUG oslo_concurrency.lockutils [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:42 np0005604943 nova_compute[238883]: 2026-02-02 11:56:42.969 238887 DEBUG oslo_concurrency.lockutils [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:42 np0005604943 nova_compute[238883]: 2026-02-02 11:56:42.970 238887 INFO nova.compute.manager [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Terminating instance#033[00m
Feb  2 06:56:42 np0005604943 nova_compute[238883]: 2026-02-02 11:56:42.971 238887 DEBUG nova.compute.manager [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 06:56:43 np0005604943 kernel: tapd01febf2-ab (unregistering): left promiscuous mode
Feb  2 06:56:43 np0005604943 NetworkManager[49093]: <info>  [1770033403.0061] device (tapd01febf2-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 06:56:43 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:43Z|00041|binding|INFO|Releasing lport d01febf2-ab0a-47cd-8e66-e0336c3333e5 from this chassis (sb_readonly=0)
Feb  2 06:56:43 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:43Z|00042|binding|INFO|Setting lport d01febf2-ab0a-47cd-8e66-e0336c3333e5 down in Southbound
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.012 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:43 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:43Z|00043|binding|INFO|Removing iface tapd01febf2-ab ovn-installed in OVS
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.014 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.019 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:ec:cc 10.100.0.8'], port_security=['fa:16:3e:f3:ec:cc 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2611c633-f397-48e0-a70b-bc81c48cbb65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efd07eae-76b7-411a-9564-96e7e46d25ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e7640959e7c4de1a4850ecd1b55f37c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '713394b4-1bd6-46bb-a85c-8ab4d32885b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51ac8d6c-86ce-45d9-a8cf-5ff78d5d5bc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=d01febf2-ab0a-47cd-8e66-e0336c3333e5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.021 155011 INFO neutron.agent.ovn.metadata.agent [-] Port d01febf2-ab0a-47cd-8e66-e0336c3333e5 in datapath efd07eae-76b7-411a-9564-96e7e46d25ba unbound from our chassis#033[00m
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.022 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network efd07eae-76b7-411a-9564-96e7e46d25ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.023 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[508da63e-5459-4b9a-be3a-1e59fc9fbdaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.024 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba namespace which is not needed anymore#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.025 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:43 np0005604943 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Feb  2 06:56:43 np0005604943 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 4.944s CPU time.
Feb  2 06:56:43 np0005604943 systemd-machined[206973]: Machine qemu-3-instance-00000003 terminated.
Feb  2 06:56:43 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[247199]: [NOTICE]   (247203) : haproxy version is 2.8.14-c23fe91
Feb  2 06:56:43 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[247199]: [NOTICE]   (247203) : path to executable is /usr/sbin/haproxy
Feb  2 06:56:43 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[247199]: [WARNING]  (247203) : Exiting Master process...
Feb  2 06:56:43 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[247199]: [WARNING]  (247203) : Exiting Master process...
Feb  2 06:56:43 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[247199]: [ALERT]    (247203) : Current worker (247205) exited with code 143 (Terminated)
Feb  2 06:56:43 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[247199]: [WARNING]  (247203) : All workers exited. Exiting... (0)
Feb  2 06:56:43 np0005604943 systemd[1]: libpod-beb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7.scope: Deactivated successfully.
Feb  2 06:56:43 np0005604943 podman[247281]: 2026-02-02 11:56:43.147464287 +0000 UTC m=+0.044127096 container died beb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 06:56:43 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-beb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7-userdata-shm.mount: Deactivated successfully.
Feb  2 06:56:43 np0005604943 systemd[1]: var-lib-containers-storage-overlay-fe5d2d713ddabcd22ddb5287cacaf02579bb173cdccc2517ac0cf08e83a49b68-merged.mount: Deactivated successfully.
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.191 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:43 np0005604943 podman[247281]: 2026-02-02 11:56:43.194832732 +0000 UTC m=+0.091495551 container cleanup beb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.195 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:43 np0005604943 systemd[1]: libpod-conmon-beb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7.scope: Deactivated successfully.
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.213 238887 INFO nova.virt.libvirt.driver [-] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Instance destroyed successfully.#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.213 238887 DEBUG nova.objects.instance [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lazy-loading 'resources' on Instance uuid 2611c633-f397-48e0-a70b-bc81c48cbb65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.224 238887 DEBUG nova.compute.manager [req-df33ba3d-9ad0-446d-b550-988acdf719fb req-c845411e-bb6c-47e2-8cc1-dca8a61507f5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Received event network-vif-unplugged-d01febf2-ab0a-47cd-8e66-e0336c3333e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.225 238887 DEBUG oslo_concurrency.lockutils [req-df33ba3d-9ad0-446d-b550-988acdf719fb req-c845411e-bb6c-47e2-8cc1-dca8a61507f5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.226 238887 DEBUG oslo_concurrency.lockutils [req-df33ba3d-9ad0-446d-b550-988acdf719fb req-c845411e-bb6c-47e2-8cc1-dca8a61507f5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.226 238887 DEBUG oslo_concurrency.lockutils [req-df33ba3d-9ad0-446d-b550-988acdf719fb req-c845411e-bb6c-47e2-8cc1-dca8a61507f5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.226 238887 DEBUG nova.compute.manager [req-df33ba3d-9ad0-446d-b550-988acdf719fb req-c845411e-bb6c-47e2-8cc1-dca8a61507f5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] No waiting events found dispatching network-vif-unplugged-d01febf2-ab0a-47cd-8e66-e0336c3333e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.227 238887 DEBUG nova.compute.manager [req-df33ba3d-9ad0-446d-b550-988acdf719fb req-c845411e-bb6c-47e2-8cc1-dca8a61507f5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Received event network-vif-unplugged-d01febf2-ab0a-47cd-8e66-e0336c3333e5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.229 238887 DEBUG nova.virt.libvirt.vif [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1480210165',display_name='tempest-VolumesActionsTest-instance-1480210165',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1480210165',id=3,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:56:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e7640959e7c4de1a4850ecd1b55f37c',ramdisk_id='',reservation_id='r-j08c62x0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-401080261',owner_user_name='tempest-VolumesActionsTest-401080261-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:56:39Z,user_data=None,user_id='37a4dd38356f4cbd937094eb4da6f5cb',uuid=2611c633-f397-48e0-a70b-bc81c48cbb65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "address": "fa:16:3e:f3:ec:cc", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01febf2-ab", "ovs_interfaceid": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.230 238887 DEBUG nova.network.os_vif_util [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Converting VIF {"id": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "address": "fa:16:3e:f3:ec:cc", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd01febf2-ab", "ovs_interfaceid": "d01febf2-ab0a-47cd-8e66-e0336c3333e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.231 238887 DEBUG nova.network.os_vif_util [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:ec:cc,bridge_name='br-int',has_traffic_filtering=True,id=d01febf2-ab0a-47cd-8e66-e0336c3333e5,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01febf2-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.231 238887 DEBUG os_vif [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:ec:cc,bridge_name='br-int',has_traffic_filtering=True,id=d01febf2-ab0a-47cd-8e66-e0336c3333e5,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01febf2-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.233 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.235 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd01febf2-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.236 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.238 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.241 238887 INFO os_vif [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:ec:cc,bridge_name='br-int',has_traffic_filtering=True,id=d01febf2-ab0a-47cd-8e66-e0336c3333e5,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd01febf2-ab')#033[00m
Feb  2 06:56:43 np0005604943 podman[247317]: 2026-02-02 11:56:43.263043106 +0000 UTC m=+0.049999367 container remove beb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.267 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[25102fa7-4e49-48aa-8cb7-86e0f7f34b67]: (4, ('Mon Feb  2 11:56:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba (beb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7)\nbeb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7\nMon Feb  2 11:56:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba (beb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7)\nbeb92f43b6c2c36d0ee0bb057bf8c869f4de4509b5cde55103ee2e1bca8d21a7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.269 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[8ddf85f1-ee93-4893-99ab-58afaf434484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.270 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefd07eae-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.272 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:43 np0005604943 kernel: tapefd07eae-70: left promiscuous mode
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.281 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.284 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6e5991cd-8f2e-4f1a-b29d-532a239d493d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.302 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2cdfa548-b08c-48f1-9fd2-9bdd90cd11d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.303 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[61f61871-80e2-409e-bf5b-6560b6861b9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.316 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[dde88982-c84e-474f-aa9a-96856705a7a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384349, 'reachable_time': 26405, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247354, 'error': None, 'target': 'ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.321 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 06:56:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:43.321 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[3080914d-996d-4d03-a01e-7ef339c7e541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:43 np0005604943 systemd[1]: run-netns-ovnmeta\x2defd07eae\x2d76b7\x2d411a\x2d9564\x2d96e7e46d25ba.mount: Deactivated successfully.
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.524 238887 INFO nova.virt.libvirt.driver [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Deleting instance files /var/lib/nova/instances/2611c633-f397-48e0-a70b-bc81c48cbb65_del#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.524 238887 INFO nova.virt.libvirt.driver [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Deletion of /var/lib/nova/instances/2611c633-f397-48e0-a70b-bc81c48cbb65_del complete#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.583 238887 INFO nova.compute.manager [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.584 238887 DEBUG oslo.service.loopingcall [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.584 238887 DEBUG nova.compute.manager [-] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 06:56:43 np0005604943 nova_compute[238883]: 2026-02-02 11:56:43.584 238887 DEBUG nova.network.neutron [-] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 06:56:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 72 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 481 KiB/s wr, 130 op/s
Feb  2 06:56:44 np0005604943 nova_compute[238883]: 2026-02-02 11:56:44.393 238887 DEBUG nova.network.neutron [-] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:56:44 np0005604943 nova_compute[238883]: 2026-02-02 11:56:44.414 238887 INFO nova.compute.manager [-] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Took 0.83 seconds to deallocate network for instance.#033[00m
Feb  2 06:56:44 np0005604943 nova_compute[238883]: 2026-02-02 11:56:44.461 238887 DEBUG oslo_concurrency.lockutils [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:44 np0005604943 nova_compute[238883]: 2026-02-02 11:56:44.462 238887 DEBUG oslo_concurrency.lockutils [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:44 np0005604943 nova_compute[238883]: 2026-02-02 11:56:44.512 238887 DEBUG oslo_concurrency.processutils [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:44 np0005604943 nova_compute[238883]: 2026-02-02 11:56:44.623 238887 DEBUG nova.compute.manager [req-35182411-d355-4690-ab46-c09f411e876a req-6dcb07e1-eeb5-46ad-9b3a-02e8cb5b1ffe 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Received event network-vif-deleted-d01febf2-ab0a-47cd-8e66-e0336c3333e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:56:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:56:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3106068788' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:56:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:56:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3106068788' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:56:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/135260994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:56:45 np0005604943 nova_compute[238883]: 2026-02-02 11:56:45.074 238887 DEBUG oslo_concurrency.processutils [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:56:45 np0005604943 nova_compute[238883]: 2026-02-02 11:56:45.078 238887 DEBUG nova.compute.provider_tree [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 06:56:45 np0005604943 nova_compute[238883]: 2026-02-02 11:56:45.103 238887 DEBUG nova.scheduler.client.report [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 06:56:45 np0005604943 nova_compute[238883]: 2026-02-02 11:56:45.141 238887 DEBUG oslo_concurrency.lockutils [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:56:45 np0005604943 nova_compute[238883]: 2026-02-02 11:56:45.174 238887 INFO nova.scheduler.client.report [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Deleted allocations for instance 2611c633-f397-48e0-a70b-bc81c48cbb65
Feb  2 06:56:45 np0005604943 nova_compute[238883]: 2026-02-02 11:56:45.356 238887 DEBUG nova.compute.manager [req-cd7959c0-1fbd-44ea-9ee4-24e766f6d346 req-1561580f-0b21-4996-875c-9cedd5eb4512 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Received event network-vif-plugged-d01febf2-ab0a-47cd-8e66-e0336c3333e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:56:45 np0005604943 nova_compute[238883]: 2026-02-02 11:56:45.357 238887 DEBUG oslo_concurrency.lockutils [req-cd7959c0-1fbd-44ea-9ee4-24e766f6d346 req-1561580f-0b21-4996-875c-9cedd5eb4512 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:56:45 np0005604943 nova_compute[238883]: 2026-02-02 11:56:45.357 238887 DEBUG oslo_concurrency.lockutils [req-cd7959c0-1fbd-44ea-9ee4-24e766f6d346 req-1561580f-0b21-4996-875c-9cedd5eb4512 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:56:45 np0005604943 nova_compute[238883]: 2026-02-02 11:56:45.357 238887 DEBUG oslo_concurrency.lockutils [req-cd7959c0-1fbd-44ea-9ee4-24e766f6d346 req-1561580f-0b21-4996-875c-9cedd5eb4512 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:56:45 np0005604943 nova_compute[238883]: 2026-02-02 11:56:45.358 238887 DEBUG nova.compute.manager [req-cd7959c0-1fbd-44ea-9ee4-24e766f6d346 req-1561580f-0b21-4996-875c-9cedd5eb4512 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] No waiting events found dispatching network-vif-plugged-d01febf2-ab0a-47cd-8e66-e0336c3333e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 06:56:45 np0005604943 nova_compute[238883]: 2026-02-02 11:56:45.358 238887 WARNING nova.compute.manager [req-cd7959c0-1fbd-44ea-9ee4-24e766f6d346 req-1561580f-0b21-4996-875c-9cedd5eb4512 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Received unexpected event network-vif-plugged-d01febf2-ab0a-47cd-8e66-e0336c3333e5 for instance with vm_state deleted and task_state None.
Feb  2 06:56:45 np0005604943 nova_compute[238883]: 2026-02-02 11:56:45.361 238887 DEBUG oslo_concurrency.lockutils [None req-f7bf913a-d640-46ed-9240-be7c7e9114b0 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "2611c633-f397-48e0-a70b-bc81c48cbb65" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:56:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Feb  2 06:56:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Feb  2 06:56:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Feb  2 06:56:45 np0005604943 nova_compute[238883]: 2026-02-02 11:56:45.951 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:56:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 72 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 131 op/s
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.450837) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033407450879, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 990, "num_deletes": 253, "total_data_size": 1208317, "memory_usage": 1228616, "flush_reason": "Manual Compaction"}
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033407455529, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 848500, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18698, "largest_seqno": 19687, "table_properties": {"data_size": 844136, "index_size": 1888, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11475, "raw_average_key_size": 21, "raw_value_size": 834695, "raw_average_value_size": 1531, "num_data_blocks": 83, "num_entries": 545, "num_filter_entries": 545, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033344, "oldest_key_time": 1770033344, "file_creation_time": 1770033407, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4726 microseconds, and 2295 cpu microseconds.
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.455567) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 848500 bytes OK
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.455583) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.456946) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.456961) EVENT_LOG_v1 {"time_micros": 1770033407456958, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.456979) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1203450, prev total WAL file size 1203450, number of live WAL files 2.
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.457385) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(828KB)], [41(9350KB)]
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033407457436, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 10423062, "oldest_snapshot_seqno": -1}
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4631 keys, 7426084 bytes, temperature: kUnknown
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033407490402, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 7426084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7394137, "index_size": 19293, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 112899, "raw_average_key_size": 24, "raw_value_size": 7309518, "raw_average_value_size": 1578, "num_data_blocks": 807, "num_entries": 4631, "num_filter_entries": 4631, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770033407, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.490597) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 7426084 bytes
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.492111) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 315.7 rd, 224.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.1 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(21.0) write-amplify(8.8) OK, records in: 5125, records dropped: 494 output_compression: NoCompression
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.492128) EVENT_LOG_v1 {"time_micros": 1770033407492120, "job": 20, "event": "compaction_finished", "compaction_time_micros": 33017, "compaction_time_cpu_micros": 14922, "output_level": 6, "num_output_files": 1, "total_output_size": 7426084, "num_input_records": 5125, "num_output_records": 4631, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033407492320, "job": 20, "event": "table_file_deletion", "file_number": 43}
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033407492908, "job": 20, "event": "table_file_deletion", "file_number": 41}
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.457306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.493082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.493096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.493098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.493101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:56:47.493103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482155475' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482155475' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:56:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1535440887' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:56:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 41 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 156 op/s
Feb  2 06:56:48 np0005604943 nova_compute[238883]: 2026-02-02 11:56:48.238 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:56:49 np0005604943 nova_compute[238883]: 2026-02-02 11:56:49.554 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:56:49 np0005604943 nova_compute[238883]: 2026-02-02 11:56:49.555 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:56:49 np0005604943 nova_compute[238883]: 2026-02-02 11:56:49.573 238887 DEBUG nova.compute.manager [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb  2 06:56:49 np0005604943 nova_compute[238883]: 2026-02-02 11:56:49.661 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:56:49 np0005604943 nova_compute[238883]: 2026-02-02 11:56:49.661 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:56:49 np0005604943 nova_compute[238883]: 2026-02-02 11:56:49.673 238887 DEBUG nova.virt.hardware [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb  2 06:56:49 np0005604943 nova_compute[238883]: 2026-02-02 11:56:49.674 238887 INFO nova.compute.claims [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Claim successful on node compute-0.ctlplane.example.com
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:56:49 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:56:49 np0005604943 podman[247522]: 2026-02-02 11:56:49.734466176 +0000 UTC m=+0.060921436 container create c06695d1b870c5ab6c3d02a4d412111b729dc338fc8a762b8aeac5344c228935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldwasser, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:56:49 np0005604943 systemd[1]: Started libpod-conmon-c06695d1b870c5ab6c3d02a4d412111b729dc338fc8a762b8aeac5344c228935.scope.
Feb  2 06:56:49 np0005604943 podman[247522]: 2026-02-02 11:56:49.699636925 +0000 UTC m=+0.026092205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:56:49 np0005604943 nova_compute[238883]: 2026-02-02 11:56:49.803 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:56:49 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:56:49 np0005604943 podman[247522]: 2026-02-02 11:56:49.837940524 +0000 UTC m=+0.164395804 container init c06695d1b870c5ab6c3d02a4d412111b729dc338fc8a762b8aeac5344c228935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb  2 06:56:49 np0005604943 podman[247522]: 2026-02-02 11:56:49.844054521 +0000 UTC m=+0.170509781 container start c06695d1b870c5ab6c3d02a4d412111b729dc338fc8a762b8aeac5344c228935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 06:56:49 np0005604943 modest_goldwasser[247538]: 167 167
Feb  2 06:56:49 np0005604943 systemd[1]: libpod-c06695d1b870c5ab6c3d02a4d412111b729dc338fc8a762b8aeac5344c228935.scope: Deactivated successfully.
Feb  2 06:56:49 np0005604943 podman[247522]: 2026-02-02 11:56:49.854953769 +0000 UTC m=+0.181409059 container attach c06695d1b870c5ab6c3d02a4d412111b729dc338fc8a762b8aeac5344c228935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldwasser, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:56:49 np0005604943 podman[247522]: 2026-02-02 11:56:49.855845933 +0000 UTC m=+0.182301203 container died c06695d1b870c5ab6c3d02a4d412111b729dc338fc8a762b8aeac5344c228935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:56:49 np0005604943 systemd[1]: var-lib-containers-storage-overlay-35c0bfdaa72ee8b29d7160f2165aecfb2d8b868f4eaca355d897080479279fd7-merged.mount: Deactivated successfully.
Feb  2 06:56:49 np0005604943 podman[247522]: 2026-02-02 11:56:49.897238274 +0000 UTC m=+0.223693554 container remove c06695d1b870c5ab6c3d02a4d412111b729dc338fc8a762b8aeac5344c228935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goldwasser, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 06:56:49 np0005604943 systemd[1]: libpod-conmon-c06695d1b870c5ab6c3d02a4d412111b729dc338fc8a762b8aeac5344c228935.scope: Deactivated successfully.
Feb  2 06:56:50 np0005604943 podman[247582]: 2026-02-02 11:56:50.011331942 +0000 UTC m=+0.032662654 container create 188fdb9a2d8aeb0b49f6b67fdb546f9fe3479f7f6aa65aae7dc7024ff5205976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:56:50 np0005604943 systemd[1]: Started libpod-conmon-188fdb9a2d8aeb0b49f6b67fdb546f9fe3479f7f6aa65aae7dc7024ff5205976.scope.
Feb  2 06:56:50 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:56:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261f32ed29459b537afc8874ed198c8820b4e8c3a43e3114b3b09c44f50bccc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261f32ed29459b537afc8874ed198c8820b4e8c3a43e3114b3b09c44f50bccc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261f32ed29459b537afc8874ed198c8820b4e8c3a43e3114b3b09c44f50bccc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261f32ed29459b537afc8874ed198c8820b4e8c3a43e3114b3b09c44f50bccc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261f32ed29459b537afc8874ed198c8820b4e8c3a43e3114b3b09c44f50bccc6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:50 np0005604943 podman[247582]: 2026-02-02 11:56:49.998515271 +0000 UTC m=+0.019845983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:56:50 np0005604943 podman[247582]: 2026-02-02 11:56:50.101840455 +0000 UTC m=+0.123171187 container init 188fdb9a2d8aeb0b49f6b67fdb546f9fe3479f7f6aa65aae7dc7024ff5205976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_villani, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 06:56:50 np0005604943 podman[247582]: 2026-02-02 11:56:50.111836188 +0000 UTC m=+0.133166910 container start 188fdb9a2d8aeb0b49f6b67fdb546f9fe3479f7f6aa65aae7dc7024ff5205976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 06:56:50 np0005604943 podman[247582]: 2026-02-02 11:56:50.125795619 +0000 UTC m=+0.147126341 container attach 188fdb9a2d8aeb0b49f6b67fdb546f9fe3479f7f6aa65aae7dc7024ff5205976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:56:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 97 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 170 op/s
Feb  2 06:56:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:56:50 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3837085895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.320 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.324 238887 DEBUG nova.compute.provider_tree [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.355 238887 DEBUG nova.scheduler.client.report [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.433 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.435 238887 DEBUG nova.compute.manager [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.491 238887 DEBUG nova.compute.manager [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.491 238887 DEBUG nova.network.neutron [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.510 238887 INFO nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.536 238887 DEBUG nova.compute.manager [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 06:56:50 np0005604943 happy_villani[247599]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:56:50 np0005604943 happy_villani[247599]: --> All data devices are unavailable
Feb  2 06:56:50 np0005604943 systemd[1]: libpod-188fdb9a2d8aeb0b49f6b67fdb546f9fe3479f7f6aa65aae7dc7024ff5205976.scope: Deactivated successfully.
Feb  2 06:56:50 np0005604943 podman[247582]: 2026-02-02 11:56:50.574317606 +0000 UTC m=+0.595648328 container died 188fdb9a2d8aeb0b49f6b67fdb546f9fe3479f7f6aa65aae7dc7024ff5205976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:56:50 np0005604943 systemd[1]: var-lib-containers-storage-overlay-261f32ed29459b537afc8874ed198c8820b4e8c3a43e3114b3b09c44f50bccc6-merged.mount: Deactivated successfully.
Feb  2 06:56:50 np0005604943 podman[247582]: 2026-02-02 11:56:50.65098031 +0000 UTC m=+0.672311012 container remove 188fdb9a2d8aeb0b49f6b67fdb546f9fe3479f7f6aa65aae7dc7024ff5205976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_villani, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Feb  2 06:56:50 np0005604943 systemd[1]: libpod-conmon-188fdb9a2d8aeb0b49f6b67fdb546f9fe3479f7f6aa65aae7dc7024ff5205976.scope: Deactivated successfully.
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.659 238887 DEBUG nova.compute.manager [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.660 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.660 238887 INFO nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Creating image(s)#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.686 238887 DEBUG nova.storage.rbd_utils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.717 238887 DEBUG nova.storage.rbd_utils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.745 238887 DEBUG nova.storage.rbd_utils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.749 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.777 238887 DEBUG nova.policy [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37a4dd38356f4cbd937094eb4da6f5cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e7640959e7c4de1a4850ecd1b55f37c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.833 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.834 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.835 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.835 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.867 238887 DEBUG nova.storage.rbd_utils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.871 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:50 np0005604943 nova_compute[238883]: 2026-02-02 11:56:50.952 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:51 np0005604943 podman[247790]: 2026-02-02 11:56:51.189304372 +0000 UTC m=+0.123423454 container create 3a37d189b25c55189703ec7fbf138cfbb4de50f3f788f53dd8803e37a731cec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 06:56:51 np0005604943 podman[247790]: 2026-02-02 11:56:51.105399278 +0000 UTC m=+0.039518390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:56:51 np0005604943 systemd[1]: Started libpod-conmon-3a37d189b25c55189703ec7fbf138cfbb4de50f3f788f53dd8803e37a731cec2.scope.
Feb  2 06:56:51 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:56:51 np0005604943 nova_compute[238883]: 2026-02-02 11:56:51.291 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:51 np0005604943 podman[247790]: 2026-02-02 11:56:51.2972322 +0000 UTC m=+0.231351292 container init 3a37d189b25c55189703ec7fbf138cfbb4de50f3f788f53dd8803e37a731cec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 06:56:51 np0005604943 podman[247790]: 2026-02-02 11:56:51.303056539 +0000 UTC m=+0.237175601 container start 3a37d189b25c55189703ec7fbf138cfbb4de50f3f788f53dd8803e37a731cec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:56:51 np0005604943 lucid_northcutt[247808]: 167 167
Feb  2 06:56:51 np0005604943 systemd[1]: libpod-3a37d189b25c55189703ec7fbf138cfbb4de50f3f788f53dd8803e37a731cec2.scope: Deactivated successfully.
Feb  2 06:56:51 np0005604943 nova_compute[238883]: 2026-02-02 11:56:51.321 238887 DEBUG nova.network.neutron [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Successfully created port: ee11e2b4-e1be-49da-91ec-7ed9a8c91002 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 06:56:51 np0005604943 podman[247790]: 2026-02-02 11:56:51.335407153 +0000 UTC m=+0.269526235 container attach 3a37d189b25c55189703ec7fbf138cfbb4de50f3f788f53dd8803e37a731cec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_northcutt, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:56:51 np0005604943 podman[247790]: 2026-02-02 11:56:51.336274077 +0000 UTC m=+0.270393139 container died 3a37d189b25c55189703ec7fbf138cfbb4de50f3f788f53dd8803e37a731cec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 06:56:51 np0005604943 nova_compute[238883]: 2026-02-02 11:56:51.363 238887 DEBUG nova.storage.rbd_utils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] resizing rbd image 5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 06:56:51 np0005604943 systemd[1]: var-lib-containers-storage-overlay-c76160d1fd3f5981b398741db1ed3ca9b0eedf432a501f38707463dc760b6742-merged.mount: Deactivated successfully.
Feb  2 06:56:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:56:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2975060980' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:56:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:56:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2975060980' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:56:51 np0005604943 podman[247790]: 2026-02-02 11:56:51.446652794 +0000 UTC m=+0.380771876 container remove 3a37d189b25c55189703ec7fbf138cfbb4de50f3f788f53dd8803e37a731cec2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_northcutt, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:56:51 np0005604943 systemd[1]: libpod-conmon-3a37d189b25c55189703ec7fbf138cfbb4de50f3f788f53dd8803e37a731cec2.scope: Deactivated successfully.
Feb  2 06:56:51 np0005604943 nova_compute[238883]: 2026-02-02 11:56:51.579 238887 DEBUG nova.objects.instance [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lazy-loading 'migration_context' on Instance uuid 5490d2e6-ef55-40d2-9077-0a99a07fb3e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:56:51 np0005604943 nova_compute[238883]: 2026-02-02 11:56:51.612 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 06:56:51 np0005604943 nova_compute[238883]: 2026-02-02 11:56:51.613 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Ensure instance console log exists: /var/lib/nova/instances/5490d2e6-ef55-40d2-9077-0a99a07fb3e7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 06:56:51 np0005604943 nova_compute[238883]: 2026-02-02 11:56:51.613 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:51 np0005604943 nova_compute[238883]: 2026-02-02 11:56:51.613 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:51 np0005604943 nova_compute[238883]: 2026-02-02 11:56:51.614 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:51 np0005604943 podman[247895]: 2026-02-02 11:56:51.628353398 +0000 UTC m=+0.070814635 container create 5e186923e4c4c45b40911def87cca039ce8ef3f883b3a952c451f9e5259c6eeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 06:56:51 np0005604943 systemd[1]: Started libpod-conmon-5e186923e4c4c45b40911def87cca039ce8ef3f883b3a952c451f9e5259c6eeb.scope.
Feb  2 06:56:51 np0005604943 podman[247895]: 2026-02-02 11:56:51.596114537 +0000 UTC m=+0.038575804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:56:51 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:56:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5b534b4322302746082c7a63a5afb448fb731bf548a755f0bc394dc240c590/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5b534b4322302746082c7a63a5afb448fb731bf548a755f0bc394dc240c590/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5b534b4322302746082c7a63a5afb448fb731bf548a755f0bc394dc240c590/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5b534b4322302746082c7a63a5afb448fb731bf548a755f0bc394dc240c590/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:51 np0005604943 podman[247895]: 2026-02-02 11:56:51.730806479 +0000 UTC m=+0.173267746 container init 5e186923e4c4c45b40911def87cca039ce8ef3f883b3a952c451f9e5259c6eeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kowalevski, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 06:56:51 np0005604943 podman[247895]: 2026-02-02 11:56:51.737849151 +0000 UTC m=+0.180310388 container start 5e186923e4c4c45b40911def87cca039ce8ef3f883b3a952c451f9e5259c6eeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 06:56:51 np0005604943 podman[247895]: 2026-02-02 11:56:51.775046447 +0000 UTC m=+0.217507714 container attach 5e186923e4c4c45b40911def87cca039ce8ef3f883b3a952c451f9e5259c6eeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 06:56:51 np0005604943 nova_compute[238883]: 2026-02-02 11:56:51.982 238887 DEBUG nova.network.neutron [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Successfully updated port: ee11e2b4-e1be-49da-91ec-7ed9a8c91002 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 06:56:52 np0005604943 nova_compute[238883]: 2026-02-02 11:56:52.005 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "refresh_cache-5490d2e6-ef55-40d2-9077-0a99a07fb3e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:56:52 np0005604943 nova_compute[238883]: 2026-02-02 11:56:52.005 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquired lock "refresh_cache-5490d2e6-ef55-40d2-9077-0a99a07fb3e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:56:52 np0005604943 nova_compute[238883]: 2026-02-02 11:56:52.005 238887 DEBUG nova.network.neutron [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]: {
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:    "0": [
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:        {
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "devices": [
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "/dev/loop3"
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            ],
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_name": "ceph_lv0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_size": "21470642176",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "name": "ceph_lv0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "tags": {
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.cluster_name": "ceph",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.crush_device_class": "",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.encrypted": "0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.objectstore": "bluestore",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.osd_id": "0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.type": "block",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.vdo": "0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.with_tpm": "0"
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            },
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "type": "block",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "vg_name": "ceph_vg0"
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:        }
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:    ],
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:    "1": [
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:        {
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "devices": [
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "/dev/loop4"
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            ],
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_name": "ceph_lv1",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_size": "21470642176",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "name": "ceph_lv1",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "tags": {
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.cluster_name": "ceph",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.crush_device_class": "",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.encrypted": "0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.objectstore": "bluestore",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.osd_id": "1",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.type": "block",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.vdo": "0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.with_tpm": "0"
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            },
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "type": "block",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "vg_name": "ceph_vg1"
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:        }
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:    ],
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:    "2": [
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:        {
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "devices": [
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "/dev/loop5"
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            ],
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_name": "ceph_lv2",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_size": "21470642176",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "name": "ceph_lv2",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "tags": {
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.cluster_name": "ceph",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.crush_device_class": "",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.encrypted": "0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.objectstore": "bluestore",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.osd_id": "2",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.type": "block",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.vdo": "0",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:                "ceph.with_tpm": "0"
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            },
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "type": "block",
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:            "vg_name": "ceph_vg2"
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:        }
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]:    ]
Feb  2 06:56:52 np0005604943 beautiful_kowalevski[247922]: }
Feb  2 06:56:52 np0005604943 systemd[1]: libpod-5e186923e4c4c45b40911def87cca039ce8ef3f883b3a952c451f9e5259c6eeb.scope: Deactivated successfully.
Feb  2 06:56:52 np0005604943 podman[247931]: 2026-02-02 11:56:52.091448363 +0000 UTC m=+0.027190313 container died 5e186923e4c4c45b40911def87cca039ce8ef3f883b3a952c451f9e5259c6eeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:56:52 np0005604943 systemd[1]: var-lib-containers-storage-overlay-be5b534b4322302746082c7a63a5afb448fb731bf548a755f0bc394dc240c590-merged.mount: Deactivated successfully.
Feb  2 06:56:52 np0005604943 podman[247931]: 2026-02-02 11:56:52.170942976 +0000 UTC m=+0.106684906 container remove 5e186923e4c4c45b40911def87cca039ce8ef3f883b3a952c451f9e5259c6eeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:56:52 np0005604943 systemd[1]: libpod-conmon-5e186923e4c4c45b40911def87cca039ce8ef3f883b3a952c451f9e5259c6eeb.scope: Deactivated successfully.
Feb  2 06:56:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 97 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 170 op/s
Feb  2 06:56:52 np0005604943 nova_compute[238883]: 2026-02-02 11:56:52.254 238887 DEBUG nova.compute.manager [req-ef3d5f38-31a9-4813-9797-4c348d10ae3d req-4ce2973d-e8f0-4b66-b0d7-9f3c3bd58d65 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Received event network-changed-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:56:52 np0005604943 nova_compute[238883]: 2026-02-02 11:56:52.255 238887 DEBUG nova.compute.manager [req-ef3d5f38-31a9-4813-9797-4c348d10ae3d req-4ce2973d-e8f0-4b66-b0d7-9f3c3bd58d65 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Refreshing instance network info cache due to event network-changed-ee11e2b4-e1be-49da-91ec-7ed9a8c91002. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:56:52 np0005604943 nova_compute[238883]: 2026-02-02 11:56:52.255 238887 DEBUG oslo_concurrency.lockutils [req-ef3d5f38-31a9-4813-9797-4c348d10ae3d req-4ce2973d-e8f0-4b66-b0d7-9f3c3bd58d65 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-5490d2e6-ef55-40d2-9077-0a99a07fb3e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:56:52 np0005604943 nova_compute[238883]: 2026-02-02 11:56:52.306 238887 DEBUG nova.network.neutron [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 06:56:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:56:52 np0005604943 podman[248010]: 2026-02-02 11:56:52.619236716 +0000 UTC m=+0.067268630 container create c807b86918557cea15ad5bc9e44ea8cf5d674d96e91b6d448b14fb84e6f48f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:56:52 np0005604943 systemd[1]: Started libpod-conmon-c807b86918557cea15ad5bc9e44ea8cf5d674d96e91b6d448b14fb84e6f48f10.scope.
Feb  2 06:56:52 np0005604943 podman[248010]: 2026-02-02 11:56:52.579472959 +0000 UTC m=+0.027504903 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:56:52 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:56:52 np0005604943 podman[248010]: 2026-02-02 11:56:52.722104587 +0000 UTC m=+0.170136511 container init c807b86918557cea15ad5bc9e44ea8cf5d674d96e91b6d448b14fb84e6f48f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb  2 06:56:52 np0005604943 podman[248010]: 2026-02-02 11:56:52.726766904 +0000 UTC m=+0.174798808 container start c807b86918557cea15ad5bc9e44ea8cf5d674d96e91b6d448b14fb84e6f48f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:56:52 np0005604943 quirky_rosalind[248026]: 167 167
Feb  2 06:56:52 np0005604943 systemd[1]: libpod-c807b86918557cea15ad5bc9e44ea8cf5d674d96e91b6d448b14fb84e6f48f10.scope: Deactivated successfully.
Feb  2 06:56:52 np0005604943 podman[248010]: 2026-02-02 11:56:52.769864522 +0000 UTC m=+0.217896436 container attach c807b86918557cea15ad5bc9e44ea8cf5d674d96e91b6d448b14fb84e6f48f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 06:56:52 np0005604943 podman[248010]: 2026-02-02 11:56:52.770325534 +0000 UTC m=+0.218357448 container died c807b86918557cea15ad5bc9e44ea8cf5d674d96e91b6d448b14fb84e6f48f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_rosalind, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 06:56:52 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4a65f03b78cd6c339e695ffc63a2d386a765550cd49a6e6d2015b821fc64fc41-merged.mount: Deactivated successfully.
Feb  2 06:56:53 np0005604943 podman[248010]: 2026-02-02 11:56:53.014547518 +0000 UTC m=+0.462579412 container remove c807b86918557cea15ad5bc9e44ea8cf5d674d96e91b6d448b14fb84e6f48f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_rosalind, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 06:56:53 np0005604943 systemd[1]: libpod-conmon-c807b86918557cea15ad5bc9e44ea8cf5d674d96e91b6d448b14fb84e6f48f10.scope: Deactivated successfully.
Feb  2 06:56:53 np0005604943 podman[248052]: 2026-02-02 11:56:53.17929068 +0000 UTC m=+0.052194507 container create 59e4af2a1eebd06477c23315b62c3813452fb5c56064aaf9600691b5db8b662e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:56:53 np0005604943 nova_compute[238883]: 2026-02-02 11:56:53.242 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:53 np0005604943 systemd[1]: Started libpod-conmon-59e4af2a1eebd06477c23315b62c3813452fb5c56064aaf9600691b5db8b662e.scope.
Feb  2 06:56:53 np0005604943 podman[248052]: 2026-02-02 11:56:53.157867245 +0000 UTC m=+0.030771082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:56:53 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:56:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcf9fc16f553e08d4a7d335091885bc3ed420d02df70ec813db364469eb9293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcf9fc16f553e08d4a7d335091885bc3ed420d02df70ec813db364469eb9293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcf9fc16f553e08d4a7d335091885bc3ed420d02df70ec813db364469eb9293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcf9fc16f553e08d4a7d335091885bc3ed420d02df70ec813db364469eb9293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:53 np0005604943 podman[248052]: 2026-02-02 11:56:53.318881875 +0000 UTC m=+0.191785722 container init 59e4af2a1eebd06477c23315b62c3813452fb5c56064aaf9600691b5db8b662e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:56:53 np0005604943 podman[248052]: 2026-02-02 11:56:53.324909259 +0000 UTC m=+0.197813086 container start 59e4af2a1eebd06477c23315b62c3813452fb5c56064aaf9600691b5db8b662e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 06:56:53 np0005604943 podman[248052]: 2026-02-02 11:56:53.356291397 +0000 UTC m=+0.229195244 container attach 59e4af2a1eebd06477c23315b62c3813452fb5c56064aaf9600691b5db8b662e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Feb  2 06:56:53 np0005604943 nova_compute[238883]: 2026-02-02 11:56:53.648 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:53.648 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:56:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:53.652 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 06:56:54 np0005604943 lvm[248148]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:56:54 np0005604943 lvm[248148]: VG ceph_vg1 finished
Feb  2 06:56:54 np0005604943 lvm[248145]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:56:54 np0005604943 lvm[248145]: VG ceph_vg0 finished
Feb  2 06:56:54 np0005604943 lvm[248150]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:56:54 np0005604943 lvm[248150]: VG ceph_vg2 finished
Feb  2 06:56:54 np0005604943 lvm[248151]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:56:54 np0005604943 lvm[248151]: VG ceph_vg0 finished
Feb  2 06:56:54 np0005604943 hopeful_heisenberg[248069]: {}
Feb  2 06:56:54 np0005604943 systemd[1]: libpod-59e4af2a1eebd06477c23315b62c3813452fb5c56064aaf9600691b5db8b662e.scope: Deactivated successfully.
Feb  2 06:56:54 np0005604943 podman[248052]: 2026-02-02 11:56:54.142653845 +0000 UTC m=+1.015557672 container died 59e4af2a1eebd06477c23315b62c3813452fb5c56064aaf9600691b5db8b662e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:56:54 np0005604943 systemd[1]: libpod-59e4af2a1eebd06477c23315b62c3813452fb5c56064aaf9600691b5db8b662e.scope: Consumed 1.130s CPU time.
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.167 238887 DEBUG nova.network.neutron [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Updating instance_info_cache with network_info: [{"id": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "address": "fa:16:3e:bf:7d:e7", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee11e2b4-e1", "ovs_interfaceid": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:56:54 np0005604943 systemd[1]: var-lib-containers-storage-overlay-bdcf9fc16f553e08d4a7d335091885bc3ed420d02df70ec813db364469eb9293-merged.mount: Deactivated successfully.
Feb  2 06:56:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 1008 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 199 KiB/s rd, 94 MiB/s wr, 325 op/s
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.186 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Releasing lock "refresh_cache-5490d2e6-ef55-40d2-9077-0a99a07fb3e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.186 238887 DEBUG nova.compute.manager [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Instance network_info: |[{"id": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "address": "fa:16:3e:bf:7d:e7", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee11e2b4-e1", "ovs_interfaceid": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.187 238887 DEBUG oslo_concurrency.lockutils [req-ef3d5f38-31a9-4813-9797-4c348d10ae3d req-4ce2973d-e8f0-4b66-b0d7-9f3c3bd58d65 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-5490d2e6-ef55-40d2-9077-0a99a07fb3e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.187 238887 DEBUG nova.network.neutron [req-ef3d5f38-31a9-4813-9797-4c348d10ae3d req-4ce2973d-e8f0-4b66-b0d7-9f3c3bd58d65 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Refreshing network info cache for port ee11e2b4-e1be-49da-91ec-7ed9a8c91002 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.191 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Start _get_guest_xml network_info=[{"id": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "address": "fa:16:3e:bf:7d:e7", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee11e2b4-e1", "ovs_interfaceid": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 06:56:54 np0005604943 podman[248052]: 2026-02-02 11:56:54.194983815 +0000 UTC m=+1.067887642 container remove 59e4af2a1eebd06477c23315b62c3813452fb5c56064aaf9600691b5db8b662e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.197 238887 WARNING nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:56:54 np0005604943 systemd[1]: libpod-conmon-59e4af2a1eebd06477c23315b62c3813452fb5c56064aaf9600691b5db8b662e.scope: Deactivated successfully.
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.204 238887 DEBUG nova.virt.libvirt.host [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.205 238887 DEBUG nova.virt.libvirt.host [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.213 238887 DEBUG nova.virt.libvirt.host [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.213 238887 DEBUG nova.virt.libvirt.host [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.214 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.214 238887 DEBUG nova.virt.hardware [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.214 238887 DEBUG nova.virt.hardware [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.214 238887 DEBUG nova.virt.hardware [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.215 238887 DEBUG nova.virt.hardware [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.215 238887 DEBUG nova.virt.hardware [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.215 238887 DEBUG nova.virt.hardware [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.215 238887 DEBUG nova.virt.hardware [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.216 238887 DEBUG nova.virt.hardware [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.216 238887 DEBUG nova.virt.hardware [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.216 238887 DEBUG nova.virt.hardware [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.216 238887 DEBUG nova.virt.hardware [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.219 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:56:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:56:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:56:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:56:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:56:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2292629018' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.835 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.863 238887 DEBUG nova.storage.rbd_utils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:54 np0005604943 nova_compute[238883]: 2026-02-02 11:56:54.868 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:55 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:56:55 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:56:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:56:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4233619427' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.468 238887 DEBUG nova.network.neutron [req-ef3d5f38-31a9-4813-9797-4c348d10ae3d req-4ce2973d-e8f0-4b66-b0d7-9f3c3bd58d65 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Updated VIF entry in instance network info cache for port ee11e2b4-e1be-49da-91ec-7ed9a8c91002. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.468 238887 DEBUG nova.network.neutron [req-ef3d5f38-31a9-4813-9797-4c348d10ae3d req-4ce2973d-e8f0-4b66-b0d7-9f3c3bd58d65 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Updating instance_info_cache with network_info: [{"id": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "address": "fa:16:3e:bf:7d:e7", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee11e2b4-e1", "ovs_interfaceid": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.482 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.483 238887 DEBUG nova.virt.libvirt.vif [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:56:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1906583542',display_name='tempest-VolumesActionsTest-instance-1906583542',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1906583542',id=4,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e7640959e7c4de1a4850ecd1b55f37c',ramdisk_id='',reservation_id='r-luip5q5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-401080261',owner_user_name='tempest-VolumesActionsTest-40108026
1-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:56:50Z,user_data=None,user_id='37a4dd38356f4cbd937094eb4da6f5cb',uuid=5490d2e6-ef55-40d2-9077-0a99a07fb3e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "address": "fa:16:3e:bf:7d:e7", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee11e2b4-e1", "ovs_interfaceid": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.483 238887 DEBUG nova.network.os_vif_util [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Converting VIF {"id": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "address": "fa:16:3e:bf:7d:e7", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee11e2b4-e1", "ovs_interfaceid": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.484 238887 DEBUG nova.network.os_vif_util [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7d:e7,bridge_name='br-int',has_traffic_filtering=True,id=ee11e2b4-e1be-49da-91ec-7ed9a8c91002,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee11e2b4-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.485 238887 DEBUG nova.objects.instance [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lazy-loading 'pci_devices' on Instance uuid 5490d2e6-ef55-40d2-9077-0a99a07fb3e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.487 238887 DEBUG oslo_concurrency.lockutils [req-ef3d5f38-31a9-4813-9797-4c348d10ae3d req-4ce2973d-e8f0-4b66-b0d7-9f3c3bd58d65 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-5490d2e6-ef55-40d2-9077-0a99a07fb3e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.501 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] End _get_guest_xml xml=<domain type="kvm">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  <uuid>5490d2e6-ef55-40d2-9077-0a99a07fb3e7</uuid>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  <name>instance-00000004</name>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <nova:name>tempest-VolumesActionsTest-instance-1906583542</nova:name>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 11:56:54</nova:creationTime>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:        <nova:user uuid="37a4dd38356f4cbd937094eb4da6f5cb">tempest-VolumesActionsTest-401080261-project-member</nova:user>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:        <nova:project uuid="4e7640959e7c4de1a4850ecd1b55f37c">tempest-VolumesActionsTest-401080261</nova:project>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:        <nova:port uuid="ee11e2b4-e1be-49da-91ec-7ed9a8c91002">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <system>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <entry name="serial">5490d2e6-ef55-40d2-9077-0a99a07fb3e7</entry>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <entry name="uuid">5490d2e6-ef55-40d2-9077-0a99a07fb3e7</entry>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    </system>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  <os>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  </clock>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk.config">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:bf:7d:e7"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <target dev="tapee11e2b4-e1"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/5490d2e6-ef55-40d2-9077-0a99a07fb3e7/console.log" append="off"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    </serial>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <video>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 06:56:55 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 06:56:55 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:56:55 np0005604943 nova_compute[238883]: </domain>
Feb  2 06:56:55 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.502 238887 DEBUG nova.compute.manager [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Preparing to wait for external event network-vif-plugged-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.502 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.502 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.502 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.503 238887 DEBUG nova.virt.libvirt.vif [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:56:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1906583542',display_name='tempest-VolumesActionsTest-instance-1906583542',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1906583542',id=4,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e7640959e7c4de1a4850ecd1b55f37c',ramdisk_id='',reservation_id='r-luip5q5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-401080261',owner_user_name='tempest-VolumesActionsTes
t-401080261-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:56:50Z,user_data=None,user_id='37a4dd38356f4cbd937094eb4da6f5cb',uuid=5490d2e6-ef55-40d2-9077-0a99a07fb3e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "address": "fa:16:3e:bf:7d:e7", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee11e2b4-e1", "ovs_interfaceid": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.504 238887 DEBUG nova.network.os_vif_util [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Converting VIF {"id": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "address": "fa:16:3e:bf:7d:e7", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee11e2b4-e1", "ovs_interfaceid": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.505 238887 DEBUG nova.network.os_vif_util [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7d:e7,bridge_name='br-int',has_traffic_filtering=True,id=ee11e2b4-e1be-49da-91ec-7ed9a8c91002,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee11e2b4-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.505 238887 DEBUG os_vif [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7d:e7,bridge_name='br-int',has_traffic_filtering=True,id=ee11e2b4-e1be-49da-91ec-7ed9a8c91002,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee11e2b4-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.506 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.506 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.507 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.511 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.511 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee11e2b4-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.512 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee11e2b4-e1, col_values=(('external_ids', {'iface-id': 'ee11e2b4-e1be-49da-91ec-7ed9a8c91002', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:7d:e7', 'vm-uuid': '5490d2e6-ef55-40d2-9077-0a99a07fb3e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.514 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:55 np0005604943 NetworkManager[49093]: <info>  [1770033415.5150] manager: (tapee11e2b4-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.516 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.519 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.522 238887 INFO os_vif [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7d:e7,bridge_name='br-int',has_traffic_filtering=True,id=ee11e2b4-e1be-49da-91ec-7ed9a8c91002,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee11e2b4-e1')#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.580 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.581 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.582 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] No VIF found with MAC fa:16:3e:bf:7d:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.582 238887 INFO nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Using config drive#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.600 238887 DEBUG nova.storage.rbd_utils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:55 np0005604943 nova_compute[238883]: 2026-02-02 11:56:55.954 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.009 238887 INFO nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Creating config drive at /var/lib/nova/instances/5490d2e6-ef55-40d2-9077-0a99a07fb3e7/disk.config#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.013 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5490d2e6-ef55-40d2-9077-0a99a07fb3e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuqzhvupg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.138 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5490d2e6-ef55-40d2-9077-0a99a07fb3e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuqzhvupg" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.161 238887 DEBUG nova.storage.rbd_utils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] rbd image 5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.166 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5490d2e6-ef55-40d2-9077-0a99a07fb3e7/disk.config 5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:56:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 1008 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 189 KiB/s rd, 89 MiB/s wr, 307 op/s
Feb  2 06:56:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Feb  2 06:56:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Feb  2 06:56:56 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.302 238887 DEBUG oslo_concurrency.processutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5490d2e6-ef55-40d2-9077-0a99a07fb3e7/disk.config 5490d2e6-ef55-40d2-9077-0a99a07fb3e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.303 238887 INFO nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Deleting local config drive /var/lib/nova/instances/5490d2e6-ef55-40d2-9077-0a99a07fb3e7/disk.config because it was imported into RBD.#033[00m
Feb  2 06:56:56 np0005604943 kernel: tapee11e2b4-e1: entered promiscuous mode
Feb  2 06:56:56 np0005604943 NetworkManager[49093]: <info>  [1770033416.3487] manager: (tapee11e2b4-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Feb  2 06:56:56 np0005604943 systemd-udevd[248144]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:56:56 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:56Z|00044|binding|INFO|Claiming lport ee11e2b4-e1be-49da-91ec-7ed9a8c91002 for this chassis.
Feb  2 06:56:56 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:56Z|00045|binding|INFO|ee11e2b4-e1be-49da-91ec-7ed9a8c91002: Claiming fa:16:3e:bf:7d:e7 10.100.0.13
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.350 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.359 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:7d:e7 10.100.0.13'], port_security=['fa:16:3e:bf:7d:e7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5490d2e6-ef55-40d2-9077-0a99a07fb3e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efd07eae-76b7-411a-9564-96e7e46d25ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e7640959e7c4de1a4850ecd1b55f37c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '713394b4-1bd6-46bb-a85c-8ab4d32885b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51ac8d6c-86ce-45d9-a8cf-5ff78d5d5bc9, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=ee11e2b4-e1be-49da-91ec-7ed9a8c91002) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.360 155011 INFO neutron.agent.ovn.metadata.agent [-] Port ee11e2b4-e1be-49da-91ec-7ed9a8c91002 in datapath efd07eae-76b7-411a-9564-96e7e46d25ba bound to our chassis#033[00m
Feb  2 06:56:56 np0005604943 NetworkManager[49093]: <info>  [1770033416.3615] device (tapee11e2b4-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 06:56:56 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:56Z|00046|binding|INFO|Setting lport ee11e2b4-e1be-49da-91ec-7ed9a8c91002 ovn-installed in OVS
Feb  2 06:56:56 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:56Z|00047|binding|INFO|Setting lport ee11e2b4-e1be-49da-91ec-7ed9a8c91002 up in Southbound
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.362 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.361 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network efd07eae-76b7-411a-9564-96e7e46d25ba#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.363 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:56 np0005604943 NetworkManager[49093]: <info>  [1770033416.3639] device (tapee11e2b4-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.369 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8c74b2-d398-46c3-b406-162c433f994b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.370 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapefd07eae-71 in ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.372 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapefd07eae-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.372 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1c2485-4c20-4139-a26a-83220e05c719]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 systemd-machined[206973]: New machine qemu-4-instance-00000004.
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.372 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[60128b0a-45bd-47e1-acca-cb79ef444147]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.381 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[2432448a-8272-468c-a6a6-33604ddefe9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.392 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c18f3de9-f0ec-451b-94e4-31ecb81ddab5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.409 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[553093b8-2a63-469d-a794-266c872c584d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.413 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[295680e8-58fb-4218-bba0-31698f6722f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 NetworkManager[49093]: <info>  [1770033416.4142] manager: (tapefd07eae-70): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.432 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[76bcf02e-e8cf-4a12-a424-b4dcd70cb657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.435 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[b9008753-6470-4847-b699-ca9440fa4dd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 NetworkManager[49093]: <info>  [1770033416.4460] device (tapefd07eae-70): carrier: link connected
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.447 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef383cc-dabf-426a-ac18-d72fd6ab0baa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.459 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1f979788-6f7d-4db1-af71-529813985b41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefd07eae-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:73:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 386179, 'reachable_time': 33437, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248355, 'error': None, 'target': 'ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.469 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6b25b4e2-1e55-4b26-b523-b5537b3e77ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:73f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 386179, 'tstamp': 386179}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248356, 'error': None, 'target': 'ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.480 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fa009651-db4f-4c34-91b7-b5e1dd9e31c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefd07eae-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:73:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 386179, 'reachable_time': 33437, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248357, 'error': None, 'target': 'ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.503 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[819966d4-9c0d-4bd1-8b4d-87d41cedf57b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.543 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[394492d6-2b74-4129-b38b-09f6127d74c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.544 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefd07eae-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.545 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.545 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefd07eae-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.547 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:56 np0005604943 kernel: tapefd07eae-70: entered promiscuous mode
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.549 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:56 np0005604943 NetworkManager[49093]: <info>  [1770033416.5495] manager: (tapefd07eae-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.550 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapefd07eae-70, col_values=(('external_ids', {'iface-id': '7a7f9bbc-6c88-4c95-b635-acbc93d76395'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.551 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:56 np0005604943 ovn_controller[145056]: 2026-02-02T11:56:56Z|00048|binding|INFO|Releasing lport 7a7f9bbc-6c88-4c95-b635-acbc93d76395 from this chassis (sb_readonly=0)
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.552 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/efd07eae-76b7-411a-9564-96e7e46d25ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/efd07eae-76b7-411a-9564-96e7e46d25ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.552 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f557ce-4c6a-4912-b2ae-ade43fbeb7e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.553 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-efd07eae-76b7-411a-9564-96e7e46d25ba
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/efd07eae-76b7-411a-9564-96e7e46d25ba.pid.haproxy
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID efd07eae-76b7-411a-9564-96e7e46d25ba
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 06:56:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:56:56.554 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba', 'env', 'PROCESS_TAG=haproxy-efd07eae-76b7-411a-9564-96e7e46d25ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/efd07eae-76b7-411a-9564-96e7e46d25ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.557 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.664 238887 DEBUG nova.compute.manager [req-23f4c6df-7618-4626-82e3-68106b1b304d req-3518ffda-db8e-43d7-8a94-093959811134 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Received event network-vif-plugged-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.665 238887 DEBUG oslo_concurrency.lockutils [req-23f4c6df-7618-4626-82e3-68106b1b304d req-3518ffda-db8e-43d7-8a94-093959811134 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.665 238887 DEBUG oslo_concurrency.lockutils [req-23f4c6df-7618-4626-82e3-68106b1b304d req-3518ffda-db8e-43d7-8a94-093959811134 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.665 238887 DEBUG oslo_concurrency.lockutils [req-23f4c6df-7618-4626-82e3-68106b1b304d req-3518ffda-db8e-43d7-8a94-093959811134 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:56 np0005604943 nova_compute[238883]: 2026-02-02 11:56:56.665 238887 DEBUG nova.compute.manager [req-23f4c6df-7618-4626-82e3-68106b1b304d req-3518ffda-db8e-43d7-8a94-093959811134 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Processing event network-vif-plugged-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 06:56:56 np0005604943 podman[248389]: 2026-02-02 11:56:56.896469026 +0000 UTC m=+0.061366128 container create edcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb  2 06:56:56 np0005604943 systemd[1]: Started libpod-conmon-edcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75.scope.
Feb  2 06:56:56 np0005604943 podman[248389]: 2026-02-02 11:56:56.857969233 +0000 UTC m=+0.022866375 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 06:56:56 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:56:56 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad29b88a2dd16bf5653b3873fb6c1176365e8ba803c4bba87d3060228115f648/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 06:56:56 np0005604943 podman[248389]: 2026-02-02 11:56:56.981682794 +0000 UTC m=+0.146579906 container init edcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 06:56:56 np0005604943 podman[248389]: 2026-02-02 11:56:56.985813547 +0000 UTC m=+0.150710649 container start edcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 06:56:57 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[248405]: [NOTICE]   (248409) : New worker (248411) forked
Feb  2 06:56:57 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[248405]: [NOTICE]   (248409) : Loading success.
Feb  2 06:56:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:56:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 504 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 104 MiB/s wr, 338 op/s
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.210 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033403.2096305, 2611c633-f397-48e0-a70b-bc81c48cbb65 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.211 238887 INFO nova.compute.manager [-] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] VM Stopped (Lifecycle Event)#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.228 238887 DEBUG nova.compute.manager [None req-9ff0cd97-46a1-4caf-b94c-2e0c36b514e5 - - - - - -] [instance: 2611c633-f397-48e0-a70b-bc81c48cbb65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.399 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033418.3991284, 5490d2e6-ef55-40d2-9077-0a99a07fb3e7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.400 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] VM Started (Lifecycle Event)#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.402 238887 DEBUG nova.compute.manager [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.404 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.408 238887 INFO nova.virt.libvirt.driver [-] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Instance spawned successfully.#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.409 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.426 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.431 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.434 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.434 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.435 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.436 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.436 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.437 238887 DEBUG nova.virt.libvirt.driver [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.463 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.464 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033418.3993328, 5490d2e6-ef55-40d2-9077-0a99a07fb3e7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.464 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] VM Paused (Lifecycle Event)#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.501 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.506 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033418.4039848, 5490d2e6-ef55-40d2-9077-0a99a07fb3e7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.506 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] VM Resumed (Lifecycle Event)#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.514 238887 INFO nova.compute.manager [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Took 7.86 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.515 238887 DEBUG nova.compute.manager [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.525 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.530 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.559 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.572 238887 INFO nova.compute.manager [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Took 8.94 seconds to build instance.#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.591 238887 DEBUG oslo_concurrency.lockutils [None req-afdf5607-f68c-4a4c-a286-e34c24ab171b 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.757 238887 DEBUG nova.compute.manager [req-691395eb-0c22-4903-a64a-96fe72302005 req-5299d559-ab87-48a7-ba58-ca28fb199978 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Received event network-vif-plugged-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.757 238887 DEBUG oslo_concurrency.lockutils [req-691395eb-0c22-4903-a64a-96fe72302005 req-5299d559-ab87-48a7-ba58-ca28fb199978 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.757 238887 DEBUG oslo_concurrency.lockutils [req-691395eb-0c22-4903-a64a-96fe72302005 req-5299d559-ab87-48a7-ba58-ca28fb199978 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.757 238887 DEBUG oslo_concurrency.lockutils [req-691395eb-0c22-4903-a64a-96fe72302005 req-5299d559-ab87-48a7-ba58-ca28fb199978 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.757 238887 DEBUG nova.compute.manager [req-691395eb-0c22-4903-a64a-96fe72302005 req-5299d559-ab87-48a7-ba58-ca28fb199978 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] No waiting events found dispatching network-vif-plugged-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:56:58 np0005604943 nova_compute[238883]: 2026-02-02 11:56:58.758 238887 WARNING nova.compute.manager [req-691395eb-0c22-4903-a64a-96fe72302005 req-5299d559-ab87-48a7-ba58-ca28fb199978 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Received unexpected event network-vif-plugged-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 for instance with vm_state active and task_state None.#033[00m
Feb  2 06:57:00 np0005604943 podman[248464]: 2026-02-02 11:57:00.04235854 +0000 UTC m=+0.058853369 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:57:00 np0005604943 podman[248463]: 2026-02-02 11:57:00.067545479 +0000 UTC m=+0.084596383 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Feb  2 06:57:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 99 MiB/s wr, 363 op/s
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.338 238887 DEBUG oslo_concurrency.lockutils [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.339 238887 DEBUG oslo_concurrency.lockutils [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.339 238887 DEBUG oslo_concurrency.lockutils [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.339 238887 DEBUG oslo_concurrency.lockutils [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.339 238887 DEBUG oslo_concurrency.lockutils [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.341 238887 INFO nova.compute.manager [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Terminating instance#033[00m
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.343 238887 DEBUG nova.compute.manager [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 06:57:00 np0005604943 kernel: tapee11e2b4-e1 (unregistering): left promiscuous mode
Feb  2 06:57:00 np0005604943 NetworkManager[49093]: <info>  [1770033420.3802] device (tapee11e2b4-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.385 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:00 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:00Z|00049|binding|INFO|Releasing lport ee11e2b4-e1be-49da-91ec-7ed9a8c91002 from this chassis (sb_readonly=0)
Feb  2 06:57:00 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:00Z|00050|binding|INFO|Setting lport ee11e2b4-e1be-49da-91ec-7ed9a8c91002 down in Southbound
Feb  2 06:57:00 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:00Z|00051|binding|INFO|Removing iface tapee11e2b4-e1 ovn-installed in OVS
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.388 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.392 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:7d:e7 10.100.0.13'], port_security=['fa:16:3e:bf:7d:e7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5490d2e6-ef55-40d2-9077-0a99a07fb3e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efd07eae-76b7-411a-9564-96e7e46d25ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e7640959e7c4de1a4850ecd1b55f37c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '713394b4-1bd6-46bb-a85c-8ab4d32885b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51ac8d6c-86ce-45d9-a8cf-5ff78d5d5bc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=ee11e2b4-e1be-49da-91ec-7ed9a8c91002) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.393 155011 INFO neutron.agent.ovn.metadata.agent [-] Port ee11e2b4-e1be-49da-91ec-7ed9a8c91002 in datapath efd07eae-76b7-411a-9564-96e7e46d25ba unbound from our chassis#033[00m
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.394 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network efd07eae-76b7-411a-9564-96e7e46d25ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.395 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fb20a08e-ce54-48b5-a02a-b9b13a1b78bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.395 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba namespace which is not needed anymore#033[00m
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.397 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:00 np0005604943 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Feb  2 06:57:00 np0005604943 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 3.981s CPU time.
Feb  2 06:57:00 np0005604943 systemd-machined[206973]: Machine qemu-4-instance-00000004 terminated.
Feb  2 06:57:00 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[248405]: [NOTICE]   (248409) : haproxy version is 2.8.14-c23fe91
Feb  2 06:57:00 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[248405]: [NOTICE]   (248409) : path to executable is /usr/sbin/haproxy
Feb  2 06:57:00 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[248405]: [WARNING]  (248409) : Exiting Master process...
Feb  2 06:57:00 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[248405]: [WARNING]  (248409) : Exiting Master process...
Feb  2 06:57:00 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[248405]: [ALERT]    (248409) : Current worker (248411) exited with code 143 (Terminated)
Feb  2 06:57:00 np0005604943 neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba[248405]: [WARNING]  (248409) : All workers exited. Exiting... (0)
Feb  2 06:57:00 np0005604943 systemd[1]: libpod-edcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75.scope: Deactivated successfully.
Feb  2 06:57:00 np0005604943 podman[248543]: 2026-02-02 11:57:00.544039399 +0000 UTC m=+0.022305550 container died edcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.554 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:00 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ad29b88a2dd16bf5653b3873fb6c1176365e8ba803c4bba87d3060228115f648-merged.mount: Deactivated successfully.
Feb  2 06:57:00 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-edcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75-userdata-shm.mount: Deactivated successfully.
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.582 238887 INFO nova.virt.libvirt.driver [-] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Instance destroyed successfully.#033[00m
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.583 238887 DEBUG nova.objects.instance [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lazy-loading 'resources' on Instance uuid 5490d2e6-ef55-40d2-9077-0a99a07fb3e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:00 np0005604943 podman[248543]: 2026-02-02 11:57:00.590381085 +0000 UTC m=+0.068647216 container cleanup edcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:57:00 np0005604943 systemd[1]: libpod-conmon-edcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75.scope: Deactivated successfully.
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.608 238887 DEBUG nova.virt.libvirt.vif [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:56:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1906583542',display_name='tempest-VolumesActionsTest-instance-1906583542',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1906583542',id=4,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:56:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e7640959e7c4de1a4850ecd1b55f37c',ramdisk_id='',reservation_id='r-luip5q5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-401080261',owner_user_name='tempest-VolumesActionsTest-401080261-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:56:58Z,user_data=None,user_id='37a4dd38356f4cbd937094eb4da6f5cb',uuid=5490d2e6-ef55-40d2-9077-0a99a07fb3e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "address": "fa:16:3e:bf:7d:e7", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee11e2b4-e1", "ovs_interfaceid": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.609 238887 DEBUG nova.network.os_vif_util [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Converting VIF {"id": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "address": "fa:16:3e:bf:7d:e7", "network": {"id": "efd07eae-76b7-411a-9564-96e7e46d25ba", "bridge": "br-int", "label": "tempest-VolumesActionsTest-450561404-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e7640959e7c4de1a4850ecd1b55f37c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee11e2b4-e1", "ovs_interfaceid": "ee11e2b4-e1be-49da-91ec-7ed9a8c91002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.610 238887 DEBUG nova.network.os_vif_util [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7d:e7,bridge_name='br-int',has_traffic_filtering=True,id=ee11e2b4-e1be-49da-91ec-7ed9a8c91002,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee11e2b4-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.610 238887 DEBUG os_vif [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7d:e7,bridge_name='br-int',has_traffic_filtering=True,id=ee11e2b4-e1be-49da-91ec-7ed9a8c91002,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee11e2b4-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.612 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.612 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee11e2b4-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.614 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.615 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.617 238887 INFO os_vif [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7d:e7,bridge_name='br-int',has_traffic_filtering=True,id=ee11e2b4-e1be-49da-91ec-7ed9a8c91002,network=Network(efd07eae-76b7-411a-9564-96e7e46d25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee11e2b4-e1')
Feb  2 06:57:00 np0005604943 podman[248570]: 2026-02-02 11:57:00.651350222 +0000 UTC m=+0.043160491 container remove edcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.654 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.656 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[602fa024-abc8-4543-b942-b038cf8065f9]: (4, ('Mon Feb  2 11:57:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba (edcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75)\nedcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75\nMon Feb  2 11:57:00 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba (edcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75)\nedcceada97f0ea98a4945d7709b64c615135f0038f9cda334b3b0a900c99fc75\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.658 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a1d1dc-f597-4303-85a6-1f293d50e439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.660 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefd07eae-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.661 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:00 np0005604943 kernel: tapefd07eae-70: left promiscuous mode
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.669 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.673 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[00415984-e98f-4bc8-8cd6-6cb84869c11d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.688 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[93078cb5-41d6-4220-8db0-ff064db896f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.690 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[976f7cf3-4dbc-46d6-9990-d59add49544c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.709 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1cea72e1-2723-4aa3-8163-71937879fdb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 386175, 'reachable_time': 19148, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248602, 'error': None, 'target': 'ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.713 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-efd07eae-76b7-411a-9564-96e7e46d25ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb  2 06:57:00 np0005604943 systemd[1]: run-netns-ovnmeta\x2defd07eae\x2d76b7\x2d411a\x2d9564\x2d96e7e46d25ba.mount: Deactivated successfully.
Feb  2 06:57:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:00.713 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[b6758c04-d55b-4bda-a1e9-8ac8decfff19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.852 238887 DEBUG nova.compute.manager [req-81c08511-8a18-4d3f-893c-1d89a8cf76d2 req-1398c8eb-48f5-4f9f-8769-6bc6ce4e9f73 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Received event network-vif-unplugged-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.852 238887 DEBUG oslo_concurrency.lockutils [req-81c08511-8a18-4d3f-893c-1d89a8cf76d2 req-1398c8eb-48f5-4f9f-8769-6bc6ce4e9f73 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.853 238887 DEBUG oslo_concurrency.lockutils [req-81c08511-8a18-4d3f-893c-1d89a8cf76d2 req-1398c8eb-48f5-4f9f-8769-6bc6ce4e9f73 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.853 238887 DEBUG oslo_concurrency.lockutils [req-81c08511-8a18-4d3f-893c-1d89a8cf76d2 req-1398c8eb-48f5-4f9f-8769-6bc6ce4e9f73 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.853 238887 DEBUG nova.compute.manager [req-81c08511-8a18-4d3f-893c-1d89a8cf76d2 req-1398c8eb-48f5-4f9f-8769-6bc6ce4e9f73 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] No waiting events found dispatching network-vif-unplugged-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.853 238887 DEBUG nova.compute.manager [req-81c08511-8a18-4d3f-893c-1d89a8cf76d2 req-1398c8eb-48f5-4f9f-8769-6bc6ce4e9f73 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Received event network-vif-unplugged-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.854 238887 DEBUG nova.compute.manager [req-81c08511-8a18-4d3f-893c-1d89a8cf76d2 req-1398c8eb-48f5-4f9f-8769-6bc6ce4e9f73 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Received event network-vif-plugged-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.854 238887 DEBUG oslo_concurrency.lockutils [req-81c08511-8a18-4d3f-893c-1d89a8cf76d2 req-1398c8eb-48f5-4f9f-8769-6bc6ce4e9f73 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.854 238887 DEBUG oslo_concurrency.lockutils [req-81c08511-8a18-4d3f-893c-1d89a8cf76d2 req-1398c8eb-48f5-4f9f-8769-6bc6ce4e9f73 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.854 238887 DEBUG oslo_concurrency.lockutils [req-81c08511-8a18-4d3f-893c-1d89a8cf76d2 req-1398c8eb-48f5-4f9f-8769-6bc6ce4e9f73 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.855 238887 DEBUG nova.compute.manager [req-81c08511-8a18-4d3f-893c-1d89a8cf76d2 req-1398c8eb-48f5-4f9f-8769-6bc6ce4e9f73 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] No waiting events found dispatching network-vif-plugged-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.855 238887 WARNING nova.compute.manager [req-81c08511-8a18-4d3f-893c-1d89a8cf76d2 req-1398c8eb-48f5-4f9f-8769-6bc6ce4e9f73 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Received unexpected event network-vif-plugged-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 for instance with vm_state active and task_state deleting.
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.875 238887 INFO nova.virt.libvirt.driver [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Deleting instance files /var/lib/nova/instances/5490d2e6-ef55-40d2-9077-0a99a07fb3e7_del
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.876 238887 INFO nova.virt.libvirt.driver [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Deletion of /var/lib/nova/instances/5490d2e6-ef55-40d2-9077-0a99a07fb3e7_del complete
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.921 238887 INFO nova.compute.manager [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Took 0.58 seconds to destroy the instance on the hypervisor.
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.921 238887 DEBUG oslo.service.loopingcall [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.921 238887 DEBUG nova.compute.manager [-] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.922 238887 DEBUG nova.network.neutron [-] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb  2 06:57:00 np0005604943 nova_compute[238883]: 2026-02-02 11:57:00.955 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:01 np0005604943 nova_compute[238883]: 2026-02-02 11:57:01.418 238887 DEBUG nova.network.neutron [-] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 06:57:01 np0005604943 nova_compute[238883]: 2026-02-02 11:57:01.433 238887 INFO nova.compute.manager [-] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Took 0.51 seconds to deallocate network for instance.
Feb  2 06:57:01 np0005604943 nova_compute[238883]: 2026-02-02 11:57:01.470 238887 DEBUG oslo_concurrency.lockutils [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:01 np0005604943 nova_compute[238883]: 2026-02-02 11:57:01.470 238887 DEBUG oslo_concurrency.lockutils [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:01 np0005604943 nova_compute[238883]: 2026-02-02 11:57:01.535 238887 DEBUG oslo_concurrency.processutils [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:57:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:57:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2309560601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:57:02 np0005604943 nova_compute[238883]: 2026-02-02 11:57:02.083 238887 DEBUG oslo_concurrency.processutils [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:57:02 np0005604943 nova_compute[238883]: 2026-02-02 11:57:02.089 238887 DEBUG nova.compute.provider_tree [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 06:57:02 np0005604943 nova_compute[238883]: 2026-02-02 11:57:02.106 238887 DEBUG nova.scheduler.client.report [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 06:57:02 np0005604943 nova_compute[238883]: 2026-02-02 11:57:02.130 238887 DEBUG oslo_concurrency.lockutils [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:02 np0005604943 nova_compute[238883]: 2026-02-02 11:57:02.153 238887 INFO nova.scheduler.client.report [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Deleted allocations for instance 5490d2e6-ef55-40d2-9077-0a99a07fb3e7
Feb  2 06:57:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 99 MiB/s wr, 363 op/s
Feb  2 06:57:02 np0005604943 nova_compute[238883]: 2026-02-02 11:57:02.239 238887 DEBUG oslo_concurrency.lockutils [None req-3d3e50c1-113b-4ac1-bb95-de3fd5177d78 37a4dd38356f4cbd937094eb4da6f5cb 4e7640959e7c4de1a4850ecd1b55f37c - - default default] Lock "5490d2e6-ef55-40d2-9077-0a99a07fb3e7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:57:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Feb  2 06:57:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Feb  2 06:57:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Feb  2 06:57:02 np0005604943 nova_compute[238883]: 2026-02-02 11:57:02.941 238887 DEBUG nova.compute.manager [req-96993294-3914-427e-b1a3-41233d235fe7 req-e7cd8f4f-08ee-4248-987d-955dab5328f6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Received event network-vif-deleted-ee11e2b4-e1be-49da-91ec-7ed9a8c91002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:57:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 43 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 13 MiB/s wr, 296 op/s
Feb  2 06:57:05 np0005604943 nova_compute[238883]: 2026-02-02 11:57:05.616 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:05 np0005604943 nova_compute[238883]: 2026-02-02 11:57:05.956 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 43 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 11 MiB/s wr, 239 op/s
Feb  2 06:57:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:57:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 125 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.8 MiB/s wr, 228 op/s
Feb  2 06:57:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:57:09
Feb  2 06:57:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:57:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:57:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'backups', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', '.mgr', 'default.rgw.control']
Feb  2 06:57:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:57:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:10.021 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:10.022 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:10.022 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 160 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 9.3 MiB/s wr, 196 op/s
Feb  2 06:57:10 np0005604943 nova_compute[238883]: 2026-02-02 11:57:10.619 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:57:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:57:10 np0005604943 nova_compute[238883]: 2026-02-02 11:57:10.959 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 160 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 9.3 MiB/s wr, 196 op/s
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.482701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033432482756, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 512, "num_deletes": 256, "total_data_size": 448257, "memory_usage": 459432, "flush_reason": "Manual Compaction"}
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033432486218, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 444454, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19688, "largest_seqno": 20199, "table_properties": {"data_size": 441556, "index_size": 870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6810, "raw_average_key_size": 18, "raw_value_size": 435548, "raw_average_value_size": 1167, "num_data_blocks": 39, "num_entries": 373, "num_filter_entries": 373, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033407, "oldest_key_time": 1770033407, "file_creation_time": 1770033432, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 3540 microseconds, and 1531 cpu microseconds.
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.486251) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 444454 bytes OK
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.486263) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.487700) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.487714) EVENT_LOG_v1 {"time_micros": 1770033432487711, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.487731) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 445233, prev total WAL file size 445233, number of live WAL files 2.
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.488070) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(434KB)], [44(7252KB)]
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033432488120, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7870538, "oldest_snapshot_seqno": -1}
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4477 keys, 7744472 bytes, temperature: kUnknown
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033432520252, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7744472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7712585, "index_size": 19617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 110931, "raw_average_key_size": 24, "raw_value_size": 7629724, "raw_average_value_size": 1704, "num_data_blocks": 817, "num_entries": 4477, "num_filter_entries": 4477, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770033432, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.520492) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7744472 bytes
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.521748) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 244.1 rd, 240.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 7.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(35.1) write-amplify(17.4) OK, records in: 5004, records dropped: 527 output_compression: NoCompression
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.521763) EVENT_LOG_v1 {"time_micros": 1770033432521756, "job": 22, "event": "compaction_finished", "compaction_time_micros": 32240, "compaction_time_cpu_micros": 13325, "output_level": 6, "num_output_files": 1, "total_output_size": 7744472, "num_input_records": 5004, "num_output_records": 4477, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033432521888, "job": 22, "event": "table_file_deletion", "file_number": 46}
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033432522439, "job": 22, "event": "table_file_deletion", "file_number": 44}
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.487966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.522534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.522539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.522542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.522544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:57:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:57:12.522546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:57:13 np0005604943 nova_compute[238883]: 2026-02-02 11:57:13.480 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:13 np0005604943 nova_compute[238883]: 2026-02-02 11:57:13.481 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:13 np0005604943 nova_compute[238883]: 2026-02-02 11:57:13.502 238887 DEBUG nova.compute.manager [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 06:57:13 np0005604943 nova_compute[238883]: 2026-02-02 11:57:13.586 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:13 np0005604943 nova_compute[238883]: 2026-02-02 11:57:13.587 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:13 np0005604943 nova_compute[238883]: 2026-02-02 11:57:13.597 238887 DEBUG nova.virt.hardware [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 06:57:13 np0005604943 nova_compute[238883]: 2026-02-02 11:57:13.598 238887 INFO nova.compute.claims [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 06:57:13 np0005604943 nova_compute[238883]: 2026-02-02 11:57:13.729 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 504 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 37 MiB/s wr, 165 op/s
Feb  2 06:57:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:57:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3242046162' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.243 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.248 238887 DEBUG nova.compute.provider_tree [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.272 238887 DEBUG nova.scheduler.client.report [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.302 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.303 238887 DEBUG nova.compute.manager [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.362 238887 DEBUG nova.compute.manager [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.363 238887 DEBUG nova.network.neutron [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.401 238887 INFO nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.432 238887 DEBUG nova.compute.manager [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.528 238887 DEBUG nova.compute.manager [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.529 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.529 238887 INFO nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Creating image(s)#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.549 238887 DEBUG nova.storage.rbd_utils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.571 238887 DEBUG nova.storage.rbd_utils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.592 238887 DEBUG nova.storage.rbd_utils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.594 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.629 238887 DEBUG nova.policy [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4846ccd205b54116a828ad91820ef58d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c061a009eae241049a1e3a1c35aa2503', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.650 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.650 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.651 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.651 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.680 238887 DEBUG nova.storage.rbd_utils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:14 np0005604943 nova_compute[238883]: 2026-02-02 11:57:14.682 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.088 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.148 238887 DEBUG nova.storage.rbd_utils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] resizing rbd image 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.230 238887 DEBUG nova.objects.instance [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.257 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.257 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Ensure instance console log exists: /var/lib/nova/instances/9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.257 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.258 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.258 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.581 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033420.5801737, 5490d2e6-ef55-40d2-9077-0a99a07fb3e7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.581 238887 INFO nova.compute.manager [-] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] VM Stopped (Lifecycle Event)#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.611 238887 DEBUG nova.compute.manager [None req-019bb680-f82f-4a52-ba02-b6b7181bc09e - - - - - -] [instance: 5490d2e6-ef55-40d2-9077-0a99a07fb3e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.622 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.899 238887 DEBUG nova.network.neutron [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Successfully created port: ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 06:57:15 np0005604943 nova_compute[238883]: 2026-02-02 11:57:15.962 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 504 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 36 MiB/s wr, 94 op/s
Feb  2 06:57:16 np0005604943 nova_compute[238883]: 2026-02-02 11:57:16.602 238887 DEBUG nova.network.neutron [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Successfully updated port: ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 06:57:16 np0005604943 nova_compute[238883]: 2026-02-02 11:57:16.617 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "refresh_cache-9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:57:16 np0005604943 nova_compute[238883]: 2026-02-02 11:57:16.618 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquired lock "refresh_cache-9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:57:16 np0005604943 nova_compute[238883]: 2026-02-02 11:57:16.618 238887 DEBUG nova.network.neutron [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 06:57:16 np0005604943 nova_compute[238883]: 2026-02-02 11:57:16.698 238887 DEBUG nova.compute.manager [req-450d5998-3b89-4b09-90d4-cea374c1337d req-47988781-1cc4-41f5-800c-72235c2381c3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Received event network-changed-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:57:16 np0005604943 nova_compute[238883]: 2026-02-02 11:57:16.698 238887 DEBUG nova.compute.manager [req-450d5998-3b89-4b09-90d4-cea374c1337d req-47988781-1cc4-41f5-800c-72235c2381c3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Refreshing instance network info cache due to event network-changed-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:57:16 np0005604943 nova_compute[238883]: 2026-02-02 11:57:16.699 238887 DEBUG oslo_concurrency.lockutils [req-450d5998-3b89-4b09-90d4-cea374c1337d req-47988781-1cc4-41f5-800c-72235c2381c3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:57:16 np0005604943 nova_compute[238883]: 2026-02-02 11:57:16.784 238887 DEBUG nova.network.neutron [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 06:57:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:57:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 785 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 58 MiB/s wr, 114 op/s
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.647 238887 DEBUG nova.network.neutron [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Updating instance_info_cache with network_info: [{"id": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "address": "fa:16:3e:61:f9:36", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebc43340-2b", "ovs_interfaceid": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.665 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Releasing lock "refresh_cache-9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.665 238887 DEBUG nova.compute.manager [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Instance network_info: |[{"id": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "address": "fa:16:3e:61:f9:36", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebc43340-2b", "ovs_interfaceid": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.665 238887 DEBUG oslo_concurrency.lockutils [req-450d5998-3b89-4b09-90d4-cea374c1337d req-47988781-1cc4-41f5-800c-72235c2381c3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.666 238887 DEBUG nova.network.neutron [req-450d5998-3b89-4b09-90d4-cea374c1337d req-47988781-1cc4-41f5-800c-72235c2381c3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Refreshing network info cache for port ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.668 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Start _get_guest_xml network_info=[{"id": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "address": "fa:16:3e:61:f9:36", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebc43340-2b", "ovs_interfaceid": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.671 238887 WARNING nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.675 238887 DEBUG nova.virt.libvirt.host [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.675 238887 DEBUG nova.virt.libvirt.host [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.680 238887 DEBUG nova.virt.libvirt.host [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.680 238887 DEBUG nova.virt.libvirt.host [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.681 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.681 238887 DEBUG nova.virt.hardware [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.681 238887 DEBUG nova.virt.hardware [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.681 238887 DEBUG nova.virt.hardware [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.682 238887 DEBUG nova.virt.hardware [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.682 238887 DEBUG nova.virt.hardware [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.682 238887 DEBUG nova.virt.hardware [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.682 238887 DEBUG nova.virt.hardware [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.682 238887 DEBUG nova.virt.hardware [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.682 238887 DEBUG nova.virt.hardware [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.683 238887 DEBUG nova.virt.hardware [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.683 238887 DEBUG nova.virt.hardware [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 06:57:19 np0005604943 nova_compute[238883]: 2026-02-02 11:57:19.685 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 878 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 61 MiB/s wr, 110 op/s
Feb  2 06:57:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:57:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1270334089' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.313 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.333 238887 DEBUG nova.storage.rbd_utils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.337 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.627 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:57:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3302435902' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.840 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.842 238887 DEBUG nova.virt.libvirt.vif [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:57:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-233799635',display_name='tempest-VolumesSnapshotTestJSON-instance-233799635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-233799635',id=5,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHb1ZEaIb8UZnHykAYp8EjqNATdm5jdbLuafEvNxVV1vyzKYWrK1BW5Doc1xOOlNSAWHW3YeBnTyxM8UeJU92Fn0f4HjOOs4ewzJZPOUJYDbLHigfQvvW8aA+1/eu17SoQ==',key_name='tempest-keypair-1798394470',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c061a009eae241049a1e3a1c35aa2503',ramdisk_id='',reservation_id='r-1rxqures',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-2018180325',owner_user_name='tempest-VolumesSnapshotTestJSON-2018180325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:57:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4846ccd205b54116a828ad91820ef58d',uuid=9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "address": "fa:16:3e:61:f9:36", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebc43340-2b", "ovs_interfaceid": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.842 238887 DEBUG nova.network.os_vif_util [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Converting VIF {"id": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "address": "fa:16:3e:61:f9:36", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebc43340-2b", "ovs_interfaceid": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.843 238887 DEBUG nova.network.os_vif_util [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:f9:36,bridge_name='br-int',has_traffic_filtering=True,id=ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebc43340-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.844 238887 DEBUG nova.objects.instance [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.861 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] End _get_guest_xml xml=<domain type="kvm">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  <uuid>9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9</uuid>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  <name>instance-00000005</name>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-233799635</nova:name>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 11:57:19</nova:creationTime>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:        <nova:user uuid="4846ccd205b54116a828ad91820ef58d">tempest-VolumesSnapshotTestJSON-2018180325-project-member</nova:user>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:        <nova:project uuid="c061a009eae241049a1e3a1c35aa2503">tempest-VolumesSnapshotTestJSON-2018180325</nova:project>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:        <nova:port uuid="ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <system>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <entry name="serial">9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9</entry>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <entry name="uuid">9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9</entry>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    </system>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  <os>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  </clock>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk.config">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:61:f9:36"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <target dev="tapebc43340-2b"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9/console.log" append="off"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    </serial>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <video>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 06:57:20 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 06:57:20 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:57:20 np0005604943 nova_compute[238883]: </domain>
Feb  2 06:57:20 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.863 238887 DEBUG nova.compute.manager [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Preparing to wait for external event network-vif-plugged-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.864 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.864 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.865 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.866 238887 DEBUG nova.virt.libvirt.vif [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:57:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-233799635',display_name='tempest-VolumesSnapshotTestJSON-instance-233799635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-233799635',id=5,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHb1ZEaIb8UZnHykAYp8EjqNATdm5jdbLuafEvNxVV1vyzKYWrK1BW5Doc1xOOlNSAWHW3YeBnTyxM8UeJU92Fn0f4HjOOs4ewzJZPOUJYDbLHigfQvvW8aA+1/eu17SoQ==',key_name='tempest-keypair-1798394470',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c061a009eae241049a1e3a1c35aa2503',ramdisk_id='',reservation_id='r-1rxqures',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-2018180325',owner_user_name='tempest-VolumesSnapshotTestJSON-2018180325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:57:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4846ccd205b54116a828ad91820ef58d',uuid=9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "address": "fa:16:3e:61:f9:36", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebc43340-2b", "ovs_interfaceid": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.866 238887 DEBUG nova.network.os_vif_util [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Converting VIF {"id": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "address": "fa:16:3e:61:f9:36", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebc43340-2b", "ovs_interfaceid": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.867 238887 DEBUG nova.network.os_vif_util [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:f9:36,bridge_name='br-int',has_traffic_filtering=True,id=ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebc43340-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.867 238887 DEBUG os_vif [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:f9:36,bridge_name='br-int',has_traffic_filtering=True,id=ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebc43340-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.868 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.868 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.868 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.871 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.871 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc43340-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.871 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapebc43340-2b, col_values=(('external_ids', {'iface-id': 'ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:f9:36', 'vm-uuid': '9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.872 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:20 np0005604943 NetworkManager[49093]: <info>  [1770033440.8741] manager: (tapebc43340-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.875 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.877 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.878 238887 INFO os_vif [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:f9:36,bridge_name='br-int',has_traffic_filtering=True,id=ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebc43340-2b')#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.929 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.929 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.930 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No VIF found with MAC fa:16:3e:61:f9:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.930 238887 INFO nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Using config drive#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.950 238887 DEBUG nova.storage.rbd_utils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:20 np0005604943 nova_compute[238883]: 2026-02-02 11:57:20.964 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003468261578900123 of space, bias 1.0, pg target 0.10404784736700369 quantized to 32 (current 32)
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003521726056453096 of space, bias 1.0, pg target 0.10565178169359288 quantized to 32 (current 32)
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.8522368973413234e-07 of space, bias 1.0, pg target 0.0001155671069202397 quantized to 32 (current 32)
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.012778034323227626 of space, bias 1.0, pg target 3.833410296968288 quantized to 32 (current 32)
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2139522013211223e-06 of space, bias 4.0, pg target 0.0014421752151694933 quantized to 16 (current 16)
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011331864133619846 quantized to 32 (current 32)
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012465050546981834 quantized to 32 (current 32)
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:57:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015109152178159797 quantized to 32 (current 32)
Feb  2 06:57:21 np0005604943 nova_compute[238883]: 2026-02-02 11:57:21.666 238887 INFO nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Creating config drive at /var/lib/nova/instances/9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9/disk.config#033[00m
Feb  2 06:57:21 np0005604943 nova_compute[238883]: 2026-02-02 11:57:21.671 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp4gon0ee3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:21 np0005604943 nova_compute[238883]: 2026-02-02 11:57:21.791 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp4gon0ee3" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:21 np0005604943 nova_compute[238883]: 2026-02-02 11:57:21.822 238887 DEBUG nova.storage.rbd_utils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:21 np0005604943 nova_compute[238883]: 2026-02-02 11:57:21.827 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9/disk.config 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:21 np0005604943 nova_compute[238883]: 2026-02-02 11:57:21.914 238887 DEBUG nova.network.neutron [req-450d5998-3b89-4b09-90d4-cea374c1337d req-47988781-1cc4-41f5-800c-72235c2381c3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Updated VIF entry in instance network info cache for port ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:57:21 np0005604943 nova_compute[238883]: 2026-02-02 11:57:21.915 238887 DEBUG nova.network.neutron [req-450d5998-3b89-4b09-90d4-cea374c1337d req-47988781-1cc4-41f5-800c-72235c2381c3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Updating instance_info_cache with network_info: [{"id": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "address": "fa:16:3e:61:f9:36", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebc43340-2b", "ovs_interfaceid": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:57:21 np0005604943 nova_compute[238883]: 2026-02-02 11:57:21.935 238887 DEBUG oslo_concurrency.lockutils [req-450d5998-3b89-4b09-90d4-cea374c1337d req-47988781-1cc4-41f5-800c-72235c2381c3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:57:21 np0005604943 nova_compute[238883]: 2026-02-02 11:57:21.948 238887 DEBUG oslo_concurrency.processutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9/disk.config 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:21 np0005604943 nova_compute[238883]: 2026-02-02 11:57:21.948 238887 INFO nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Deleting local config drive /var/lib/nova/instances/9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9/disk.config because it was imported into RBD.#033[00m
Feb  2 06:57:21 np0005604943 kernel: tapebc43340-2b: entered promiscuous mode
Feb  2 06:57:21 np0005604943 NetworkManager[49093]: <info>  [1770033441.9874] manager: (tapebc43340-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Feb  2 06:57:21 np0005604943 nova_compute[238883]: 2026-02-02 11:57:21.988 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:21 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:21Z|00052|binding|INFO|Claiming lport ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c for this chassis.
Feb  2 06:57:21 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:21Z|00053|binding|INFO|ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c: Claiming fa:16:3e:61:f9:36 10.100.0.9
Feb  2 06:57:21 np0005604943 nova_compute[238883]: 2026-02-02 11:57:21.991 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:21 np0005604943 nova_compute[238883]: 2026-02-02 11:57:21.994 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.002 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:f9:36 10.100.0.9'], port_security=['fa:16:3e:61:f9:36 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-edd3a331-b14a-4730-a21c-7fc793b77005', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c061a009eae241049a1e3a1c35aa2503', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9af29c12-9f42-4791-af1d-67ddceeec2d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be5764d7-de7f-4844-afc6-7eadee6d6d3c, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.003 155011 INFO neutron.agent.ovn.metadata.agent [-] Port ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c in datapath edd3a331-b14a-4730-a21c-7fc793b77005 bound to our chassis#033[00m
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.004 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network edd3a331-b14a-4730-a21c-7fc793b77005#033[00m
Feb  2 06:57:22 np0005604943 systemd-machined[206973]: New machine qemu-5-instance-00000005.
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.013 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[bc57862e-8af8-49b9-beb6-609ccf0d0ec5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.014 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapedd3a331-b1 in ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.016 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapedd3a331-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.016 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[f9230efc-7404-4be0-a477-dca10b8baa43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.017 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ad6cdc-3e55-42c1-81ef-6a2d71c05a71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.022 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:22 np0005604943 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Feb  2 06:57:22 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:22Z|00054|binding|INFO|Setting lport ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c ovn-installed in OVS
Feb  2 06:57:22 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:22Z|00055|binding|INFO|Setting lport ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c up in Southbound
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.027 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.029 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[f08beccb-306b-4179-8a4b-169da2cd91aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:22 np0005604943 systemd-udevd[248952]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:57:22 np0005604943 NetworkManager[49093]: <info>  [1770033442.0414] device (tapebc43340-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 06:57:22 np0005604943 NetworkManager[49093]: <info>  [1770033442.0421] device (tapebc43340-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.051 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3520f0-d2e1-4dba-82ee-7ae3115d2c48]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.074 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[0124dcad-3934-45d0-af7c-b6c3cf045692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.077 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5f3d39-0261-44ae-b040-a05be561fa48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:22 np0005604943 NetworkManager[49093]: <info>  [1770033442.0784] manager: (tapedd3a331-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.114 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[c16d5541-a51b-496e-825b-26883302c185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.117 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[d7bdc29c-f278-4fa5-b5cb-7db926369d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:22 np0005604943 NetworkManager[49093]: <info>  [1770033442.1352] device (tapedd3a331-b0): carrier: link connected
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.138 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[2ebaa643-4a06-459c-bc1d-5e4c169f81ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.152 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[22c7476f-cfc2-491d-a857-bd8479d5fa9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapedd3a331-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f4:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388748, 'reachable_time': 32300, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248983, 'error': None, 'target': 'ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.164 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0ff86c-5637-4ec8-96b0-ef37257884c2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:f4cf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388748, 'tstamp': 388748}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248984, 'error': None, 'target': 'ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.178 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[f912cb78-1e96-4093-aea5-f2a1a176266b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapedd3a331-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f4:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388748, 'reachable_time': 32300, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248993, 'error': None, 'target': 'ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 878 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 58 MiB/s wr, 92 op/s
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.205 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d7fe7d-80e3-4e29-b695-b2c9c462047a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.242 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1e60fa4e-f66d-43ad-9af8-98ddcc7e1c78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.243 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedd3a331-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.243 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.244 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedd3a331-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 06:57:22 np0005604943 kernel: tapedd3a331-b0: entered promiscuous mode
Feb  2 06:57:22 np0005604943 NetworkManager[49093]: <info>  [1770033442.2461] manager: (tapedd3a331-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.245 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.248 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapedd3a331-b0, col_values=(('external_ids', {'iface-id': 'b2fa0ea4-27d8-4ad2-be31-b707a8a3d0e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 06:57:22 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:22Z|00056|binding|INFO|Releasing lport b2fa0ea4-27d8-4ad2-be31-b707a8a3d0e4 from this chassis (sb_readonly=0)
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.255 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/edd3a331-b14a-4730-a21c-7fc793b77005.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/edd3a331-b14a-4730-a21c-7fc793b77005.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.255 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.256 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[71d97af6-358e-453f-9d7b-6345437aa66f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.257 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-edd3a331-b14a-4730-a21c-7fc793b77005
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/edd3a331-b14a-4730-a21c-7fc793b77005.pid.haproxy
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID edd3a331-b14a-4730-a21c-7fc793b77005
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb  2 06:57:22 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:22.258 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005', 'env', 'PROCESS_TAG=haproxy-edd3a331-b14a-4730-a21c-7fc793b77005', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/edd3a331-b14a-4730-a21c-7fc793b77005.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.313 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033442.3129194, 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.313 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] VM Started (Lifecycle Event)
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.333 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.335 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033442.3147542, 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.335 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] VM Paused (Lifecycle Event)
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.356 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.359 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.381 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 06:57:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:57:22 np0005604943 podman[249059]: 2026-02-02 11:57:22.584293719 +0000 UTC m=+0.047049574 container create 75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 06:57:22 np0005604943 systemd[1]: Started libpod-conmon-75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9.scope.
Feb  2 06:57:22 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:57:22 np0005604943 podman[249059]: 2026-02-02 11:57:22.557921398 +0000 UTC m=+0.020677283 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 06:57:22 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a8c93410ff91e7082a42287a7b49fc8bb509e3ad5ab13178c411d5a575fedfd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:22 np0005604943 podman[249059]: 2026-02-02 11:57:22.671241241 +0000 UTC m=+0.133997156 container init 75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:57:22 np0005604943 podman[249059]: 2026-02-02 11:57:22.678720077 +0000 UTC m=+0.141475972 container start 75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:57:22 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[249074]: [NOTICE]   (249078) : New worker (249080) forked
Feb  2 06:57:22 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[249074]: [NOTICE]   (249078) : Loading success.
Feb  2 06:57:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Feb  2 06:57:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Feb  2 06:57:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.996 238887 DEBUG nova.compute.manager [req-55ec475e-d561-4d0b-8952-ea9c9a578420 req-0da4850c-9653-4c0e-9732-42cc2d04633d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Received event network-vif-plugged-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.997 238887 DEBUG oslo_concurrency.lockutils [req-55ec475e-d561-4d0b-8952-ea9c9a578420 req-0da4850c-9653-4c0e-9732-42cc2d04633d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.997 238887 DEBUG oslo_concurrency.lockutils [req-55ec475e-d561-4d0b-8952-ea9c9a578420 req-0da4850c-9653-4c0e-9732-42cc2d04633d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.997 238887 DEBUG oslo_concurrency.lockutils [req-55ec475e-d561-4d0b-8952-ea9c9a578420 req-0da4850c-9653-4c0e-9732-42cc2d04633d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.997 238887 DEBUG nova.compute.manager [req-55ec475e-d561-4d0b-8952-ea9c9a578420 req-0da4850c-9653-4c0e-9732-42cc2d04633d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Processing event network-vif-plugged-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb  2 06:57:22 np0005604943 nova_compute[238883]: 2026-02-02 11:57:22.998 238887 DEBUG nova.compute.manager [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.002 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033443.001996, 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.002 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] VM Resumed (Lifecycle Event)
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.004 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.008 238887 INFO nova.virt.libvirt.driver [-] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Instance spawned successfully.
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.009 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.026 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.032 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.035 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.035 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.035 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.036 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.036 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.037 238887 DEBUG nova.virt.libvirt.driver [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.071 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.123 238887 INFO nova.compute.manager [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Took 8.60 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.124 238887 DEBUG nova.compute.manager [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.350 238887 INFO nova.compute.manager [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Took 9.80 seconds to build instance.#033[00m
Feb  2 06:57:23 np0005604943 nova_compute[238883]: 2026-02-02 11:57:23.369 238887 DEBUG oslo_concurrency.lockutils [None req-87aa4f41-030b-4530-b2ee-e80c3e505cb3 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 61 MiB/s wr, 134 op/s
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.030 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Acquiring lock "643f1632-51eb-4ee3-a152-cea78635d59c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.030 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.052 238887 DEBUG nova.compute.manager [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.073 238887 DEBUG nova.compute.manager [req-00d0531a-6f49-4657-8b24-61d2388b504c req-3345f855-951e-4225-a334-c5d8f56b2d23 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Received event network-vif-plugged-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.073 238887 DEBUG oslo_concurrency.lockutils [req-00d0531a-6f49-4657-8b24-61d2388b504c req-3345f855-951e-4225-a334-c5d8f56b2d23 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.074 238887 DEBUG oslo_concurrency.lockutils [req-00d0531a-6f49-4657-8b24-61d2388b504c req-3345f855-951e-4225-a334-c5d8f56b2d23 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.074 238887 DEBUG oslo_concurrency.lockutils [req-00d0531a-6f49-4657-8b24-61d2388b504c req-3345f855-951e-4225-a334-c5d8f56b2d23 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.074 238887 DEBUG nova.compute.manager [req-00d0531a-6f49-4657-8b24-61d2388b504c req-3345f855-951e-4225-a334-c5d8f56b2d23 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] No waiting events found dispatching network-vif-plugged-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.074 238887 WARNING nova.compute.manager [req-00d0531a-6f49-4657-8b24-61d2388b504c req-3345f855-951e-4225-a334-c5d8f56b2d23 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Received unexpected event network-vif-plugged-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c for instance with vm_state active and task_state None.
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.240 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.240 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.247 238887 DEBUG nova.virt.hardware [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.247 238887 INFO nova.compute.claims [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Claim successful on node compute-0.ctlplane.example.com
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.424 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:25 np0005604943 NetworkManager[49093]: <info>  [1770033445.4254] manager: (patch-br-int-to-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/37)
Feb  2 06:57:25 np0005604943 NetworkManager[49093]: <info>  [1770033445.4257] device (patch-br-int-to-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:57:25 np0005604943 NetworkManager[49093]: <warn>  [1770033445.4258] device (patch-br-int-to-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 06:57:25 np0005604943 NetworkManager[49093]: <info>  [1770033445.4265] manager: (patch-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/38)
Feb  2 06:57:25 np0005604943 NetworkManager[49093]: <info>  [1770033445.4267] device (patch-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 06:57:25 np0005604943 NetworkManager[49093]: <warn>  [1770033445.4267] device (patch-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 06:57:25 np0005604943 NetworkManager[49093]: <info>  [1770033445.4273] manager: (patch-br-int-to-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Feb  2 06:57:25 np0005604943 NetworkManager[49093]: <info>  [1770033445.4280] manager: (patch-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Feb  2 06:57:25 np0005604943 NetworkManager[49093]: <info>  [1770033445.4284] device (patch-br-int-to-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb  2 06:57:25 np0005604943 NetworkManager[49093]: <info>  [1770033445.4288] device (patch-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.484 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:25 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:25Z|00057|binding|INFO|Releasing lport b2fa0ea4-27d8-4ad2-be31-b707a8a3d0e4 from this chassis (sb_readonly=0)
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.497 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.523 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:57:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Feb  2 06:57:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Feb  2 06:57:25 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.905 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:25 np0005604943 nova_compute[238883]: 2026-02-02 11:57:25.967 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:57:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:57:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3167280773' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.074 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.078 238887 DEBUG nova.compute.provider_tree [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.096 238887 DEBUG nova.scheduler.client.report [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.122 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.122 238887 DEBUG nova.compute.manager [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.182 238887 DEBUG nova.compute.manager [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.182 238887 DEBUG nova.network.neutron [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb  2 06:57:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 689 KiB/s rd, 42 MiB/s wr, 136 op/s
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.205 238887 INFO nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.222 238887 DEBUG nova.compute.manager [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.303 238887 DEBUG nova.compute.manager [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.304 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.304 238887 INFO nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Creating image(s)
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.322 238887 DEBUG nova.storage.rbd_utils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] rbd image 643f1632-51eb-4ee3-a152-cea78635d59c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.343 238887 DEBUG nova.storage.rbd_utils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] rbd image 643f1632-51eb-4ee3-a152-cea78635d59c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.360 238887 DEBUG nova.storage.rbd_utils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] rbd image 643f1632-51eb-4ee3-a152-cea78635d59c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.363 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.404 238887 DEBUG nova.policy [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e144ee25c2b84ec5a1aecb69ceec619d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a7e9957088fe43eaae10f11401fe89c4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.408 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.409 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.410 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.410 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.431 238887 DEBUG nova.storage.rbd_utils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] rbd image 643f1632-51eb-4ee3-a152-cea78635d59c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.434 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 643f1632-51eb-4ee3-a152-cea78635d59c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.637 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 643f1632-51eb-4ee3-a152-cea78635d59c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.664 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.711 238887 DEBUG nova.storage.rbd_utils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] resizing rbd image 643f1632-51eb-4ee3-a152-cea78635d59c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.741 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.741 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.741 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.742 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.742 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:57:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:57:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3716141542' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:57:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:57:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3716141542' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.803 238887 DEBUG nova.objects.instance [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lazy-loading 'migration_context' on Instance uuid 643f1632-51eb-4ee3-a152-cea78635d59c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.817 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.818 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Ensure instance console log exists: /var/lib/nova/instances/643f1632-51eb-4ee3-a152-cea78635d59c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.818 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.819 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:26 np0005604943 nova_compute[238883]: 2026-02-02 11:57:26.819 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.103 238887 DEBUG nova.network.neutron [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Successfully created port: 9b14b0fc-0160-4715-bd45-6a8ec1128754 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.258 238887 DEBUG nova.compute.manager [req-3b8e45b6-e0a9-4367-94b1-ecf3202ed2cc req-e8dd27ed-24ef-432d-8d28-71e1869f27da 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Received event network-changed-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.259 238887 DEBUG nova.compute.manager [req-3b8e45b6-e0a9-4367-94b1-ecf3202ed2cc req-e8dd27ed-24ef-432d-8d28-71e1869f27da 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Refreshing instance network info cache due to event network-changed-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.260 238887 DEBUG oslo_concurrency.lockutils [req-3b8e45b6-e0a9-4367-94b1-ecf3202ed2cc req-e8dd27ed-24ef-432d-8d28-71e1869f27da 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.260 238887 DEBUG oslo_concurrency.lockutils [req-3b8e45b6-e0a9-4367-94b1-ecf3202ed2cc req-e8dd27ed-24ef-432d-8d28-71e1869f27da 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.261 238887 DEBUG nova.network.neutron [req-3b8e45b6-e0a9-4367-94b1-ecf3202ed2cc req-e8dd27ed-24ef-432d-8d28-71e1869f27da 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Refreshing network info cache for port ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:57:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:57:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2652078641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.287 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.352 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.352 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 06:57:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.490 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.491 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4587MB free_disk=59.9673070833087GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.491 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.492 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.573 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.573 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 643f1632-51eb-4ee3-a152-cea78635d59c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.573 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.574 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.626 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.689 238887 DEBUG nova.network.neutron [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Successfully updated port: 9b14b0fc-0160-4715-bd45-6a8ec1128754 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.708 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Acquiring lock "refresh_cache-643f1632-51eb-4ee3-a152-cea78635d59c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.708 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Acquired lock "refresh_cache-643f1632-51eb-4ee3-a152-cea78635d59c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.709 238887 DEBUG nova.network.neutron [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 06:57:27 np0005604943 nova_compute[238883]: 2026-02-02 11:57:27.869 238887 DEBUG nova.network.neutron [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 06:57:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:57:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3945036092' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.181 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.191 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:57:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 384 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 37 MiB/s wr, 217 op/s
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.209 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.236 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.237 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.695 238887 DEBUG nova.network.neutron [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Updating instance_info_cache with network_info: [{"id": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "address": "fa:16:3e:cc:bc:aa", "network": {"id": "6b1c6eff-f6e6-4af3-aa02-11290c8b6c83", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-678478278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7e9957088fe43eaae10f11401fe89c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b14b0fc-01", "ovs_interfaceid": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.715 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Releasing lock "refresh_cache-643f1632-51eb-4ee3-a152-cea78635d59c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.716 238887 DEBUG nova.compute.manager [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Instance network_info: |[{"id": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "address": "fa:16:3e:cc:bc:aa", "network": {"id": "6b1c6eff-f6e6-4af3-aa02-11290c8b6c83", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-678478278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7e9957088fe43eaae10f11401fe89c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b14b0fc-01", "ovs_interfaceid": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.718 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Start _get_guest_xml network_info=[{"id": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "address": "fa:16:3e:cc:bc:aa", "network": {"id": "6b1c6eff-f6e6-4af3-aa02-11290c8b6c83", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-678478278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7e9957088fe43eaae10f11401fe89c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b14b0fc-01", "ovs_interfaceid": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.723 238887 WARNING nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.728 238887 DEBUG nova.virt.libvirt.host [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.729 238887 DEBUG nova.virt.libvirt.host [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.732 238887 DEBUG nova.virt.libvirt.host [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.732 238887 DEBUG nova.virt.libvirt.host [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.733 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.733 238887 DEBUG nova.virt.hardware [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.733 238887 DEBUG nova.virt.hardware [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.733 238887 DEBUG nova.virt.hardware [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.734 238887 DEBUG nova.virt.hardware [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.734 238887 DEBUG nova.virt.hardware [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.734 238887 DEBUG nova.virt.hardware [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.734 238887 DEBUG nova.virt.hardware [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.734 238887 DEBUG nova.virt.hardware [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.735 238887 DEBUG nova.virt.hardware [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.735 238887 DEBUG nova.virt.hardware [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.735 238887 DEBUG nova.virt.hardware [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.738 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.895 238887 DEBUG nova.network.neutron [req-3b8e45b6-e0a9-4367-94b1-ecf3202ed2cc req-e8dd27ed-24ef-432d-8d28-71e1869f27da 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Updated VIF entry in instance network info cache for port ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.896 238887 DEBUG nova.network.neutron [req-3b8e45b6-e0a9-4367-94b1-ecf3202ed2cc req-e8dd27ed-24ef-432d-8d28-71e1869f27da 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Updating instance_info_cache with network_info: [{"id": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "address": "fa:16:3e:61:f9:36", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebc43340-2b", "ovs_interfaceid": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:57:28 np0005604943 nova_compute[238883]: 2026-02-02 11:57:28.924 238887 DEBUG oslo_concurrency.lockutils [req-3b8e45b6-e0a9-4367-94b1-ecf3202ed2cc req-e8dd27ed-24ef-432d-8d28-71e1869f27da 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.215 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.216 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.217 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.255 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb  2 06:57:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:57:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671953301' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.283 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.304 238887 DEBUG nova.storage.rbd_utils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] rbd image 643f1632-51eb-4ee3-a152-cea78635d59c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.308 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.348 238887 DEBUG nova.compute.manager [req-f350173c-c51e-4bd1-b4cf-09178a2af62b req-b3d6cb68-e99f-42c4-aef6-7f47bd7e0cf7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Received event network-changed-9b14b0fc-0160-4715-bd45-6a8ec1128754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.348 238887 DEBUG nova.compute.manager [req-f350173c-c51e-4bd1-b4cf-09178a2af62b req-b3d6cb68-e99f-42c4-aef6-7f47bd7e0cf7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Refreshing instance network info cache due to event network-changed-9b14b0fc-0160-4715-bd45-6a8ec1128754. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.349 238887 DEBUG oslo_concurrency.lockutils [req-f350173c-c51e-4bd1-b4cf-09178a2af62b req-b3d6cb68-e99f-42c4-aef6-7f47bd7e0cf7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-643f1632-51eb-4ee3-a152-cea78635d59c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.349 238887 DEBUG oslo_concurrency.lockutils [req-f350173c-c51e-4bd1-b4cf-09178a2af62b req-b3d6cb68-e99f-42c4-aef6-7f47bd7e0cf7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-643f1632-51eb-4ee3-a152-cea78635d59c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.349 238887 DEBUG nova.network.neutron [req-f350173c-c51e-4bd1-b4cf-09178a2af62b req-b3d6cb68-e99f-42c4-aef6-7f47bd7e0cf7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Refreshing network info cache for port 9b14b0fc-0160-4715-bd45-6a8ec1128754 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.748 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "refresh_cache-9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.748 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquired lock "refresh_cache-9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.749 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.749 238887 DEBUG nova.objects.instance [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:57:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2005010532' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.857 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.858 238887 DEBUG nova.virt.libvirt.vif [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:57:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1409568483',display_name='tempest-VolumesExtendAttachedTest-instance-1409568483',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1409568483',id=6,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP+uUo4VUyfglFi+aS2qETcNEtGj44RWS0a2Pk2h7vf5brP5LrWipTQCrkXaRZFBTx21OoM2zKs+JQwCJmwwZKA2GQ11phRMnZCJt8nktB8rm8WWFPRDL6V2IIklbXlWzA==',key_name='tempest-keypair-1569122066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a7e9957088fe43eaae10f11401fe89c4',ramdisk_id='',reservation_id='r-0pdj2d24',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1942788377',owner_user_name='tempest-VolumesExtendAttachedTest-1942788377-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:57:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e144ee25c2b84ec5a1aecb69ceec619d',uuid=643f1632-51eb-4ee3-a152-cea78635d59c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "address": "fa:16:3e:cc:bc:aa", "network": {"id": "6b1c6eff-f6e6-4af3-aa02-11290c8b6c83", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-678478278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7e9957088fe43eaae10f11401fe89c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b14b0fc-01", "ovs_interfaceid": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.859 238887 DEBUG nova.network.os_vif_util [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Converting VIF {"id": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "address": "fa:16:3e:cc:bc:aa", "network": {"id": "6b1c6eff-f6e6-4af3-aa02-11290c8b6c83", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-678478278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7e9957088fe43eaae10f11401fe89c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b14b0fc-01", "ovs_interfaceid": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.860 238887 DEBUG nova.network.os_vif_util [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:bc:aa,bridge_name='br-int',has_traffic_filtering=True,id=9b14b0fc-0160-4715-bd45-6a8ec1128754,network=Network(6b1c6eff-f6e6-4af3-aa02-11290c8b6c83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b14b0fc-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.861 238887 DEBUG nova.objects.instance [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 643f1632-51eb-4ee3-a152-cea78635d59c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.882 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] End _get_guest_xml xml=<domain type="kvm">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  <uuid>643f1632-51eb-4ee3-a152-cea78635d59c</uuid>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  <name>instance-00000006</name>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <nova:name>tempest-VolumesExtendAttachedTest-instance-1409568483</nova:name>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 11:57:28</nova:creationTime>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:        <nova:user uuid="e144ee25c2b84ec5a1aecb69ceec619d">tempest-VolumesExtendAttachedTest-1942788377-project-member</nova:user>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:        <nova:project uuid="a7e9957088fe43eaae10f11401fe89c4">tempest-VolumesExtendAttachedTest-1942788377</nova:project>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:        <nova:port uuid="9b14b0fc-0160-4715-bd45-6a8ec1128754">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <system>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <entry name="serial">643f1632-51eb-4ee3-a152-cea78635d59c</entry>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <entry name="uuid">643f1632-51eb-4ee3-a152-cea78635d59c</entry>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    </system>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  <os>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  </clock>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/643f1632-51eb-4ee3-a152-cea78635d59c_disk">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/643f1632-51eb-4ee3-a152-cea78635d59c_disk.config">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:cc:bc:aa"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <target dev="tap9b14b0fc-01"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/643f1632-51eb-4ee3-a152-cea78635d59c/console.log" append="off"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    </serial>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <video>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 06:57:29 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 06:57:29 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:57:29 np0005604943 nova_compute[238883]: </domain>
Feb  2 06:57:29 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.883 238887 DEBUG nova.compute.manager [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Preparing to wait for external event network-vif-plugged-9b14b0fc-0160-4715-bd45-6a8ec1128754 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.883 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Acquiring lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.883 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.883 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.884 238887 DEBUG nova.virt.libvirt.vif [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:57:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1409568483',display_name='tempest-VolumesExtendAttachedTest-instance-1409568483',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1409568483',id=6,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP+uUo4VUyfglFi+aS2qETcNEtGj44RWS0a2Pk2h7vf5brP5LrWipTQCrkXaRZFBTx21OoM2zKs+JQwCJmwwZKA2GQ11phRMnZCJt8nktB8rm8WWFPRDL6V2IIklbXlWzA==',key_name='tempest-keypair-1569122066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a7e9957088fe43eaae10f11401fe89c4',ramdisk_id='',reservation_id='r-0pdj2d24',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1942788377',owner_user_name='tempest-VolumesExtendAttachedTest-1942788377-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:57:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e144ee25c2b84ec5a1aecb69ceec619d',uuid=643f1632-51eb-4ee3-a152-cea78635d59c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "address": "fa:16:3e:cc:bc:aa", "network": {"id": "6b1c6eff-f6e6-4af3-aa02-11290c8b6c83", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-678478278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7e9957088fe43eaae10f11401fe89c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b14b0fc-01", "ovs_interfaceid": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.884 238887 DEBUG nova.network.os_vif_util [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Converting VIF {"id": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "address": "fa:16:3e:cc:bc:aa", "network": {"id": "6b1c6eff-f6e6-4af3-aa02-11290c8b6c83", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-678478278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7e9957088fe43eaae10f11401fe89c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b14b0fc-01", "ovs_interfaceid": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.885 238887 DEBUG nova.network.os_vif_util [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:bc:aa,bridge_name='br-int',has_traffic_filtering=True,id=9b14b0fc-0160-4715-bd45-6a8ec1128754,network=Network(6b1c6eff-f6e6-4af3-aa02-11290c8b6c83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b14b0fc-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.885 238887 DEBUG os_vif [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:bc:aa,bridge_name='br-int',has_traffic_filtering=True,id=9b14b0fc-0160-4715-bd45-6a8ec1128754,network=Network(6b1c6eff-f6e6-4af3-aa02-11290c8b6c83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b14b0fc-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.886 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.886 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.886 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.889 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.889 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b14b0fc-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.890 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b14b0fc-01, col_values=(('external_ids', {'iface-id': '9b14b0fc-0160-4715-bd45-6a8ec1128754', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:bc:aa', 'vm-uuid': '643f1632-51eb-4ee3-a152-cea78635d59c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.892 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:29 np0005604943 NetworkManager[49093]: <info>  [1770033449.8936] manager: (tap9b14b0fc-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.895 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.898 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.899 238887 INFO os_vif [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:bc:aa,bridge_name='br-int',has_traffic_filtering=True,id=9b14b0fc-0160-4715-bd45-6a8ec1128754,network=Network(6b1c6eff-f6e6-4af3-aa02-11290c8b6c83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b14b0fc-01')#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.949 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.949 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.950 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] No VIF found with MAC fa:16:3e:cc:bc:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.950 238887 INFO nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Using config drive#033[00m
Feb  2 06:57:29 np0005604943 nova_compute[238883]: 2026-02-02 11:57:29.969 238887 DEBUG nova.storage.rbd_utils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] rbd image 643f1632-51eb-4ee3-a152-cea78635d59c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 180 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 38 MiB/s wr, 264 op/s
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.413 238887 INFO nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Creating config drive at /var/lib/nova/instances/643f1632-51eb-4ee3-a152-cea78635d59c/disk.config#033[00m
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.419 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/643f1632-51eb-4ee3-a152-cea78635d59c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpoob05hsu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.537 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/643f1632-51eb-4ee3-a152-cea78635d59c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpoob05hsu" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.565 238887 DEBUG nova.storage.rbd_utils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] rbd image 643f1632-51eb-4ee3-a152-cea78635d59c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.569 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/643f1632-51eb-4ee3-a152-cea78635d59c/disk.config 643f1632-51eb-4ee3-a152-cea78635d59c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.690 238887 DEBUG oslo_concurrency.processutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/643f1632-51eb-4ee3-a152-cea78635d59c/disk.config 643f1632-51eb-4ee3-a152-cea78635d59c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.691 238887 INFO nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Deleting local config drive /var/lib/nova/instances/643f1632-51eb-4ee3-a152-cea78635d59c/disk.config because it was imported into RBD.#033[00m
Feb  2 06:57:30 np0005604943 NetworkManager[49093]: <info>  [1770033450.7230] manager: (tap9b14b0fc-01): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Feb  2 06:57:30 np0005604943 kernel: tap9b14b0fc-01: entered promiscuous mode
Feb  2 06:57:30 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:30Z|00058|binding|INFO|Claiming lport 9b14b0fc-0160-4715-bd45-6a8ec1128754 for this chassis.
Feb  2 06:57:30 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:30Z|00059|binding|INFO|9b14b0fc-0160-4715-bd45-6a8ec1128754: Claiming fa:16:3e:cc:bc:aa 10.100.0.7
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.728 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:30 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:30Z|00060|binding|INFO|Setting lport 9b14b0fc-0160-4715-bd45-6a8ec1128754 ovn-installed in OVS
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.736 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:30 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:30Z|00061|binding|INFO|Setting lport 9b14b0fc-0160-4715-bd45-6a8ec1128754 up in Southbound
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.738 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:bc:aa 10.100.0.7'], port_security=['fa:16:3e:cc:bc:aa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '643f1632-51eb-4ee3-a152-cea78635d59c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a7e9957088fe43eaae10f11401fe89c4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '81c35f8d-4fc6-4e28-a844-0d02fb39bbac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17dd8f22-1dbd-4081-bcaa-6cbfe492bdad, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=9b14b0fc-0160-4715-bd45-6a8ec1128754) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.739 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 9b14b0fc-0160-4715-bd45-6a8ec1128754 in datapath 6b1c6eff-f6e6-4af3-aa02-11290c8b6c83 bound to our chassis#033[00m
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.739 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.741 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6b1c6eff-f6e6-4af3-aa02-11290c8b6c83#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.749 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[83ad210d-55c7-45d5-b881-7964111d6760]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.750 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6b1c6eff-f1 in ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.751 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6b1c6eff-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.751 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[931bc493-821c-40f4-ba79-b76225cec1ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.752 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[e31c0d73-2471-4a14-8efb-e431aa16ec79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.761 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[765f1126-0695-4633-929a-b71102b39183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 systemd-machined[206973]: New machine qemu-6-instance-00000006.
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.773 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a31261fa-ab33-445e-910d-b179aee4cdd3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.797 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[ce451628-596a-4eda-b331-eb5476cb296d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 systemd-udevd[249490]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.801 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[22416e8a-91b5-41cc-b08a-a5b24a305dbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 NetworkManager[49093]: <info>  [1770033450.8024] manager: (tap6b1c6eff-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Feb  2 06:57:30 np0005604943 systemd-udevd[249493]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:57:30 np0005604943 NetworkManager[49093]: <info>  [1770033450.8151] device (tap9b14b0fc-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 06:57:30 np0005604943 NetworkManager[49093]: <info>  [1770033450.8155] device (tap9b14b0fc-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.828 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9900e5-8513-4a15-925a-8e07deb8177a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.831 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[3a2a97a2-862f-4a83-b2f4-2e6bc7b8275d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 podman[249454]: 2026-02-02 11:57:30.835596986 +0000 UTC m=+0.079396444 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb  2 06:57:30 np0005604943 podman[249456]: 2026-02-02 11:57:30.840866743 +0000 UTC m=+0.084791474 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 06:57:30 np0005604943 NetworkManager[49093]: <info>  [1770033450.8446] device (tap6b1c6eff-f0): carrier: link connected
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.847 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[df3cccfb-c57b-4885-a097-5e786b3da02d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.857 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[76516ba4-b7ac-4b0a-9b9f-42a1f43d2f35]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b1c6eff-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:44:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389619, 'reachable_time': 40064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249529, 'error': None, 'target': 'ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.869 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a57de824-5db1-4428-ac28-cd9d1a98b0d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:447b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389619, 'tstamp': 389619}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249530, 'error': None, 'target': 'ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.877 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9f056697-2e0a-49f8-b7f1-a4b659091b51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b1c6eff-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:44:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389619, 'reachable_time': 40064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249531, 'error': None, 'target': 'ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.894 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed33404-f956-4eb8-91be-357bca949d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.921 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fae894ba-075e-46d2-be5e-ea5ba5b13529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.922 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b1c6eff-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.922 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.923 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b1c6eff-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.924 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:30 np0005604943 kernel: tap6b1c6eff-f0: entered promiscuous mode
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.926 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:30 np0005604943 NetworkManager[49093]: <info>  [1770033450.9268] manager: (tap6b1c6eff-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.927 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6b1c6eff-f0, col_values=(('external_ids', {'iface-id': '62bd25a1-d81c-41a6-b140-ac403d57fe36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.928 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:30 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:30Z|00062|binding|INFO|Releasing lport 62bd25a1-d81c-41a6-b140-ac403d57fe36 from this chassis (sb_readonly=0)
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.929 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.929 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6b1c6eff-f6e6-4af3-aa02-11290c8b6c83.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6b1c6eff-f6e6-4af3-aa02-11290c8b6c83.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.930 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d34342-a704-42c4-80ae-3cd876030c01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.931 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/6b1c6eff-f6e6-4af3-aa02-11290c8b6c83.pid.haproxy
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID 6b1c6eff-f6e6-4af3-aa02-11290c8b6c83
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 06:57:30 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:30.932 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83', 'env', 'PROCESS_TAG=haproxy-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6b1c6eff-f6e6-4af3-aa02-11290c8b6c83.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.934 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.968 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.986 238887 DEBUG nova.network.neutron [req-f350173c-c51e-4bd1-b4cf-09178a2af62b req-b3d6cb68-e99f-42c4-aef6-7f47bd7e0cf7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Updated VIF entry in instance network info cache for port 9b14b0fc-0160-4715-bd45-6a8ec1128754. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:57:30 np0005604943 nova_compute[238883]: 2026-02-02 11:57:30.987 238887 DEBUG nova.network.neutron [req-f350173c-c51e-4bd1-b4cf-09178a2af62b req-b3d6cb68-e99f-42c4-aef6-7f47bd7e0cf7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Updating instance_info_cache with network_info: [{"id": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "address": "fa:16:3e:cc:bc:aa", "network": {"id": "6b1c6eff-f6e6-4af3-aa02-11290c8b6c83", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-678478278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7e9957088fe43eaae10f11401fe89c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b14b0fc-01", "ovs_interfaceid": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.006 238887 DEBUG oslo_concurrency.lockutils [req-f350173c-c51e-4bd1-b4cf-09178a2af62b req-b3d6cb68-e99f-42c4-aef6-7f47bd7e0cf7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-643f1632-51eb-4ee3-a152-cea78635d59c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:57:31 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:31Z|00063|binding|INFO|Releasing lport 62bd25a1-d81c-41a6-b140-ac403d57fe36 from this chassis (sb_readonly=0)
Feb  2 06:57:31 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:31Z|00064|binding|INFO|Releasing lport b2fa0ea4-27d8-4ad2-be31-b707a8a3d0e4 from this chassis (sb_readonly=0)
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.034 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.091 238887 DEBUG nova.compute.manager [req-1214a4af-5ca0-44d0-9320-09dc5df6ebf8 req-e7d56d87-19c9-4b3e-9fd5-c42b0ea2c914 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Received event network-vif-plugged-9b14b0fc-0160-4715-bd45-6a8ec1128754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.092 238887 DEBUG oslo_concurrency.lockutils [req-1214a4af-5ca0-44d0-9320-09dc5df6ebf8 req-e7d56d87-19c9-4b3e-9fd5-c42b0ea2c914 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.092 238887 DEBUG oslo_concurrency.lockutils [req-1214a4af-5ca0-44d0-9320-09dc5df6ebf8 req-e7d56d87-19c9-4b3e-9fd5-c42b0ea2c914 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.092 238887 DEBUG oslo_concurrency.lockutils [req-1214a4af-5ca0-44d0-9320-09dc5df6ebf8 req-e7d56d87-19c9-4b3e-9fd5-c42b0ea2c914 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.094 238887 DEBUG nova.compute.manager [req-1214a4af-5ca0-44d0-9320-09dc5df6ebf8 req-e7d56d87-19c9-4b3e-9fd5-c42b0ea2c914 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Processing event network-vif-plugged-9b14b0fc-0160-4715-bd45-6a8ec1128754 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 06:57:31 np0005604943 podman[249564]: 2026-02-02 11:57:31.244216424 +0000 UTC m=+0.047096977 container create 8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Feb  2 06:57:31 np0005604943 systemd[1]: Started libpod-conmon-8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5.scope.
Feb  2 06:57:31 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:57:31 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0293fcea6f3ed88d3b1e8f4a3fa7d8db52d06486f20cc5eb19f207acf485c27/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:31 np0005604943 podman[249564]: 2026-02-02 11:57:31.220516592 +0000 UTC m=+0.023397165 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 06:57:31 np0005604943 podman[249564]: 2026-02-02 11:57:31.318336987 +0000 UTC m=+0.121217550 container init 8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:57:31 np0005604943 podman[249564]: 2026-02-02 11:57:31.322442495 +0000 UTC m=+0.125323038 container start 8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:57:31 np0005604943 neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83[249579]: [NOTICE]   (249590) : New worker (249603) forked
Feb  2 06:57:31 np0005604943 neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83[249579]: [NOTICE]   (249590) : Loading success.
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.487 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033451.48681, 643f1632-51eb-4ee3-a152-cea78635d59c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.488 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] VM Started (Lifecycle Event)#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.490 238887 DEBUG nova.compute.manager [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.492 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.494 238887 INFO nova.virt.libvirt.driver [-] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Instance spawned successfully.#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.495 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.520 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.527 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.530 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.531 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.532 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.532 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.532 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.533 238887 DEBUG nova.virt.libvirt.driver [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.568 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.569 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033451.4869914, 643f1632-51eb-4ee3-a152-cea78635d59c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.569 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] VM Paused (Lifecycle Event)#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.589 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.591 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033451.4922328, 643f1632-51eb-4ee3-a152-cea78635d59c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.592 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] VM Resumed (Lifecycle Event)#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.599 238887 INFO nova.compute.manager [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Took 5.30 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.600 238887 DEBUG nova.compute.manager [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.610 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.614 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.641 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.666 238887 INFO nova.compute.manager [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Took 6.57 seconds to build instance.#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.681 238887 DEBUG oslo_concurrency.lockutils [None req-e3138fd4-01b5-4e9d-96b6-2cc1cb1a01d2 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.729 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Updating instance_info_cache with network_info: [{"id": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "address": "fa:16:3e:61:f9:36", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebc43340-2b", "ovs_interfaceid": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.747 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Releasing lock "refresh_cache-9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.748 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.748 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.749 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.749 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.749 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.750 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.750 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.750 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.751 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.751 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.768 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.769 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:57:31 np0005604943 nova_compute[238883]: 2026-02-02 11:57:31.770 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  2 06:57:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 180 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 22 MiB/s wr, 220 op/s
Feb  2 06:57:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:57:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Feb  2 06:57:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Feb  2 06:57:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Feb  2 06:57:32 np0005604943 nova_compute[238883]: 2026-02-02 11:57:32.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:57:32 np0005604943 nova_compute[238883]: 2026-02-02 11:57:32.665 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:57:33 np0005604943 nova_compute[238883]: 2026-02-02 11:57:33.197 238887 DEBUG nova.compute.manager [req-4ef1573f-1e70-42d4-ba04-f9018b467e33 req-a4eda975-97da-41c6-b577-a3fc4a9ef4cc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Received event network-vif-plugged-9b14b0fc-0160-4715-bd45-6a8ec1128754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:57:33 np0005604943 nova_compute[238883]: 2026-02-02 11:57:33.198 238887 DEBUG oslo_concurrency.lockutils [req-4ef1573f-1e70-42d4-ba04-f9018b467e33 req-a4eda975-97da-41c6-b577-a3fc4a9ef4cc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:33 np0005604943 nova_compute[238883]: 2026-02-02 11:57:33.198 238887 DEBUG oslo_concurrency.lockutils [req-4ef1573f-1e70-42d4-ba04-f9018b467e33 req-a4eda975-97da-41c6-b577-a3fc4a9ef4cc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:33 np0005604943 nova_compute[238883]: 2026-02-02 11:57:33.198 238887 DEBUG oslo_concurrency.lockutils [req-4ef1573f-1e70-42d4-ba04-f9018b467e33 req-a4eda975-97da-41c6-b577-a3fc4a9ef4cc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:33 np0005604943 nova_compute[238883]: 2026-02-02 11:57:33.198 238887 DEBUG nova.compute.manager [req-4ef1573f-1e70-42d4-ba04-f9018b467e33 req-a4eda975-97da-41c6-b577-a3fc4a9ef4cc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] No waiting events found dispatching network-vif-plugged-9b14b0fc-0160-4715-bd45-6a8ec1128754 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:57:33 np0005604943 nova_compute[238883]: 2026-02-02 11:57:33.199 238887 WARNING nova.compute.manager [req-4ef1573f-1e70-42d4-ba04-f9018b467e33 req-a4eda975-97da-41c6-b577-a3fc4a9ef4cc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Received unexpected event network-vif-plugged-9b14b0fc-0160-4715-bd45-6a8ec1128754 for instance with vm_state active and task_state None.#033[00m
Feb  2 06:57:33 np0005604943 nova_compute[238883]: 2026-02-02 11:57:33.668 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:57:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 183 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.9 MiB/s wr, 279 op/s
Feb  2 06:57:34 np0005604943 nova_compute[238883]: 2026-02-02 11:57:34.894 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:35 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:35Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:f9:36 10.100.0.9
Feb  2 06:57:35 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:35Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:f9:36 10.100.0.9
Feb  2 06:57:35 np0005604943 nova_compute[238883]: 2026-02-02 11:57:35.325 238887 DEBUG nova.compute.manager [req-6010eec0-32c6-4d45-9586-31cb7e982a07 req-38211962-ac51-4690-916a-a195b4d0b55a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Received event network-changed-9b14b0fc-0160-4715-bd45-6a8ec1128754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:57:35 np0005604943 nova_compute[238883]: 2026-02-02 11:57:35.325 238887 DEBUG nova.compute.manager [req-6010eec0-32c6-4d45-9586-31cb7e982a07 req-38211962-ac51-4690-916a-a195b4d0b55a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Refreshing instance network info cache due to event network-changed-9b14b0fc-0160-4715-bd45-6a8ec1128754. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:57:35 np0005604943 nova_compute[238883]: 2026-02-02 11:57:35.325 238887 DEBUG oslo_concurrency.lockutils [req-6010eec0-32c6-4d45-9586-31cb7e982a07 req-38211962-ac51-4690-916a-a195b4d0b55a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-643f1632-51eb-4ee3-a152-cea78635d59c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:57:35 np0005604943 nova_compute[238883]: 2026-02-02 11:57:35.326 238887 DEBUG oslo_concurrency.lockutils [req-6010eec0-32c6-4d45-9586-31cb7e982a07 req-38211962-ac51-4690-916a-a195b4d0b55a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-643f1632-51eb-4ee3-a152-cea78635d59c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:57:35 np0005604943 nova_compute[238883]: 2026-02-02 11:57:35.326 238887 DEBUG nova.network.neutron [req-6010eec0-32c6-4d45-9586-31cb7e982a07 req-38211962-ac51-4690-916a-a195b4d0b55a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Refreshing network info cache for port 9b14b0fc-0160-4715-bd45-6a8ec1128754 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:57:35 np0005604943 nova_compute[238883]: 2026-02-02 11:57:35.970 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 183 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.0 MiB/s wr, 236 op/s
Feb  2 06:57:37 np0005604943 nova_compute[238883]: 2026-02-02 11:57:37.047 238887 DEBUG nova.network.neutron [req-6010eec0-32c6-4d45-9586-31cb7e982a07 req-38211962-ac51-4690-916a-a195b4d0b55a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Updated VIF entry in instance network info cache for port 9b14b0fc-0160-4715-bd45-6a8ec1128754. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:57:37 np0005604943 nova_compute[238883]: 2026-02-02 11:57:37.049 238887 DEBUG nova.network.neutron [req-6010eec0-32c6-4d45-9586-31cb7e982a07 req-38211962-ac51-4690-916a-a195b4d0b55a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Updating instance_info_cache with network_info: [{"id": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "address": "fa:16:3e:cc:bc:aa", "network": {"id": "6b1c6eff-f6e6-4af3-aa02-11290c8b6c83", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-678478278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7e9957088fe43eaae10f11401fe89c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b14b0fc-01", "ovs_interfaceid": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:57:37 np0005604943 nova_compute[238883]: 2026-02-02 11:57:37.073 238887 DEBUG oslo_concurrency.lockutils [req-6010eec0-32c6-4d45-9586-31cb7e982a07 req-38211962-ac51-4690-916a-a195b4d0b55a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-643f1632-51eb-4ee3-a152-cea78635d59c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:57:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:57:38 np0005604943 nova_compute[238883]: 2026-02-02 11:57:38.127 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 212 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 185 op/s
Feb  2 06:57:39 np0005604943 nova_compute[238883]: 2026-02-02 11:57:39.897 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 213 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 160 op/s
Feb  2 06:57:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:57:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:57:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:57:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:57:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:57:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:57:40 np0005604943 nova_compute[238883]: 2026-02-02 11:57:40.978 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:41 np0005604943 nova_compute[238883]: 2026-02-02 11:57:41.539 238887 DEBUG oslo_concurrency.lockutils [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:41 np0005604943 nova_compute[238883]: 2026-02-02 11:57:41.540 238887 DEBUG oslo_concurrency.lockutils [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:41 np0005604943 nova_compute[238883]: 2026-02-02 11:57:41.554 238887 DEBUG nova.objects.instance [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lazy-loading 'flavor' on Instance uuid 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:41 np0005604943 nova_compute[238883]: 2026-02-02 11:57:41.590 238887 INFO nova.virt.libvirt.driver [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Ignoring supplied device name: /dev/vdb#033[00m
Feb  2 06:57:41 np0005604943 nova_compute[238883]: 2026-02-02 11:57:41.608 238887 DEBUG oslo_concurrency.lockutils [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:41 np0005604943 nova_compute[238883]: 2026-02-02 11:57:41.967 238887 DEBUG oslo_concurrency.lockutils [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:41 np0005604943 nova_compute[238883]: 2026-02-02 11:57:41.968 238887 DEBUG oslo_concurrency.lockutils [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:41 np0005604943 nova_compute[238883]: 2026-02-02 11:57:41.968 238887 INFO nova.compute.manager [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Attaching volume 539a75fe-1001-4f8c-81ab-40858fd6839c to /dev/vdb#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.174 238887 DEBUG os_brick.utils [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.176 238887 INFO oslo.privsep.daemon [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpney6i09_/privsep.sock']#033[00m
Feb  2 06:57:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 213 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 160 op/s
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.226 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.824 238887 INFO oslo.privsep.daemon [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.700 249642 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.704 249642 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.706 249642 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.706 249642 INFO oslo.privsep.daemon [-] privsep daemon running as pid 249642#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.827 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[1913558c-3291-4f27-9852-fc36fd6be195]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.913 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.924 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.925 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[9635cbd0-a6fd-435b-ab78-abb4b297b42f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.926 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.932 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.932 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[ec17befd-3898-41c7-9a5c-6bbd05afe49b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.934 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.942 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.943 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf55141-648e-415e-880a-dac7dbed894d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.944 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[37a48e7d-3156-4aab-9658-cf5522870bca]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.945 238887 DEBUG oslo_concurrency.processutils [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.957 238887 DEBUG oslo_concurrency.processutils [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "nvme version" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.960 238887 DEBUG os_brick.initiator.connectors.lightos [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.961 238887 DEBUG os_brick.initiator.connectors.lightos [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.961 238887 DEBUG os_brick.initiator.connectors.lightos [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.961 238887 DEBUG os_brick.utils [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] <== get_connector_properties: return (786ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 06:57:42 np0005604943 nova_compute[238883]: 2026-02-02 11:57:42.962 238887 DEBUG nova.virt.block_device [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Updating existing volume attachment record: 947b0bee-cfa5-4850-9274-c996d43eacb0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 06:57:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:57:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1254956099' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:57:43 np0005604943 nova_compute[238883]: 2026-02-02 11:57:43.878 238887 DEBUG oslo_concurrency.lockutils [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:43 np0005604943 nova_compute[238883]: 2026-02-02 11:57:43.879 238887 DEBUG oslo_concurrency.lockutils [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:43 np0005604943 nova_compute[238883]: 2026-02-02 11:57:43.880 238887 DEBUG oslo_concurrency.lockutils [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:43 np0005604943 nova_compute[238883]: 2026-02-02 11:57:43.885 238887 DEBUG nova.objects.instance [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lazy-loading 'flavor' on Instance uuid 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:43 np0005604943 nova_compute[238883]: 2026-02-02 11:57:43.919 238887 DEBUG nova.virt.libvirt.driver [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Attempting to attach volume 539a75fe-1001-4f8c-81ab-40858fd6839c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 06:57:43 np0005604943 nova_compute[238883]: 2026-02-02 11:57:43.922 238887 DEBUG nova.virt.libvirt.guest [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 06:57:43 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:57:43 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-539a75fe-1001-4f8c-81ab-40858fd6839c">
Feb  2 06:57:43 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:57:43 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:57:43 np0005604943 nova_compute[238883]:  <auth username="openstack">
Feb  2 06:57:43 np0005604943 nova_compute[238883]:    <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:57:43 np0005604943 nova_compute[238883]:  </auth>
Feb  2 06:57:43 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:57:43 np0005604943 nova_compute[238883]:  <serial>539a75fe-1001-4f8c-81ab-40858fd6839c</serial>
Feb  2 06:57:43 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:57:43 np0005604943 nova_compute[238883]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 06:57:44 np0005604943 nova_compute[238883]: 2026-02-02 11:57:44.045 238887 DEBUG nova.virt.libvirt.driver [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:57:44 np0005604943 nova_compute[238883]: 2026-02-02 11:57:44.045 238887 DEBUG nova.virt.libvirt.driver [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:57:44 np0005604943 nova_compute[238883]: 2026-02-02 11:57:44.045 238887 DEBUG nova.virt.libvirt.driver [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:57:44 np0005604943 nova_compute[238883]: 2026-02-02 11:57:44.045 238887 DEBUG nova.virt.libvirt.driver [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No VIF found with MAC fa:16:3e:61:f9:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:57:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 242 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 163 op/s
Feb  2 06:57:44 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:44Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:bc:aa 10.100.0.7
Feb  2 06:57:44 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:44Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:bc:aa 10.100.0.7
Feb  2 06:57:44 np0005604943 nova_compute[238883]: 2026-02-02 11:57:44.321 238887 DEBUG oslo_concurrency.lockutils [None req-1df0841b-1639-40ba-a081-07122a3e9534 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:57:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2720921714' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:57:44 np0005604943 nova_compute[238883]: 2026-02-02 11:57:44.900 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:57:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2720921714' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:57:46 np0005604943 nova_compute[238883]: 2026-02-02 11:57:46.023 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 242 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 682 KiB/s rd, 3.8 MiB/s wr, 98 op/s
Feb  2 06:57:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Feb  2 06:57:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Feb  2 06:57:46 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Feb  2 06:57:47 np0005604943 nova_compute[238883]: 2026-02-02 11:57:47.349 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "e3333751-86a5-40df-9180-a0c8153f06a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:47 np0005604943 nova_compute[238883]: 2026-02-02 11:57:47.350 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:47 np0005604943 nova_compute[238883]: 2026-02-02 11:57:47.364 238887 DEBUG nova.compute.manager [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 06:57:47 np0005604943 nova_compute[238883]: 2026-02-02 11:57:47.434 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:47 np0005604943 nova_compute[238883]: 2026-02-02 11:57:47.435 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:47 np0005604943 nova_compute[238883]: 2026-02-02 11:57:47.444 238887 DEBUG nova.virt.hardware [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 06:57:47 np0005604943 nova_compute[238883]: 2026-02-02 11:57:47.444 238887 INFO nova.compute.claims [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 06:57:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:57:47 np0005604943 nova_compute[238883]: 2026-02-02 11:57:47.571 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Feb  2 06:57:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Feb  2 06:57:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Feb  2 06:57:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:57:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/385056529' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.117 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.121 238887 DEBUG nova.compute.provider_tree [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.137 238887 DEBUG nova.scheduler.client.report [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.163 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.163 238887 DEBUG nova.compute.manager [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 06:57:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 246 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 343 KiB/s rd, 3.2 MiB/s wr, 109 op/s
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.211 238887 DEBUG nova.compute.manager [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.211 238887 DEBUG nova.network.neutron [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.230 238887 INFO nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.252 238887 DEBUG nova.compute.manager [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.334 238887 DEBUG nova.compute.manager [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.335 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.335 238887 INFO nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Creating image(s)#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.359 238887 DEBUG nova.storage.rbd_utils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image e3333751-86a5-40df-9180-a0c8153f06a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.380 238887 DEBUG nova.storage.rbd_utils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image e3333751-86a5-40df-9180-a0c8153f06a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.402 238887 DEBUG nova.storage.rbd_utils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image e3333751-86a5-40df-9180-a0c8153f06a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.407 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.422 238887 DEBUG nova.policy [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '619ce2f20dd849f6a462d2162bcccc7a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '61afd70cadc143c2a9c65f6cec8dc9e8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.454 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.454 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.455 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.455 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.473 238887 DEBUG nova.storage.rbd_utils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image e3333751-86a5-40df-9180-a0c8153f06a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.478 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 e3333751-86a5-40df-9180-a0c8153f06a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Feb  2 06:57:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Feb  2 06:57:48 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.716 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 e3333751-86a5-40df-9180-a0c8153f06a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.764 238887 DEBUG nova.storage.rbd_utils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] resizing rbd image e3333751-86a5-40df-9180-a0c8153f06a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.834 238887 DEBUG nova.objects.instance [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lazy-loading 'migration_context' on Instance uuid e3333751-86a5-40df-9180-a0c8153f06a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.853 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.854 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Ensure instance console log exists: /var/lib/nova/instances/e3333751-86a5-40df-9180-a0c8153f06a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.854 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.854 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:48 np0005604943 nova_compute[238883]: 2026-02-02 11:57:48.855 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:49 np0005604943 nova_compute[238883]: 2026-02-02 11:57:49.044 238887 DEBUG nova.network.neutron [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Successfully created port: 2e53d89b-c3e1-480c-af8c-98b7e9b8d425 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 06:57:49 np0005604943 nova_compute[238883]: 2026-02-02 11:57:49.904 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.189 238887 DEBUG nova.network.neutron [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Successfully updated port: 2e53d89b-c3e1-480c-af8c-98b7e9b8d425 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 06:57:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 257 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 241 KiB/s rd, 440 KiB/s wr, 96 op/s
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.207 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "refresh_cache-e3333751-86a5-40df-9180-a0c8153f06a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.208 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquired lock "refresh_cache-e3333751-86a5-40df-9180-a0c8153f06a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.208 238887 DEBUG nova.network.neutron [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.308 238887 DEBUG nova.compute.manager [req-8ffc0053-9ec0-4f22-bd49-4afe6ee0f619 req-b492a0db-a53a-4be8-92b6-b29d3cacd55c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Received event network-changed-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.309 238887 DEBUG nova.compute.manager [req-8ffc0053-9ec0-4f22-bd49-4afe6ee0f619 req-b492a0db-a53a-4be8-92b6-b29d3cacd55c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Refreshing instance network info cache due to event network-changed-2e53d89b-c3e1-480c-af8c-98b7e9b8d425. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.309 238887 DEBUG oslo_concurrency.lockutils [req-8ffc0053-9ec0-4f22-bd49-4afe6ee0f619 req-b492a0db-a53a-4be8-92b6-b29d3cacd55c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-e3333751-86a5-40df-9180-a0c8153f06a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.560 238887 DEBUG oslo_concurrency.lockutils [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Acquiring lock "643f1632-51eb-4ee3-a152-cea78635d59c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.560 238887 DEBUG oslo_concurrency.lockutils [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.592 238887 DEBUG nova.objects.instance [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lazy-loading 'flavor' on Instance uuid 643f1632-51eb-4ee3-a152-cea78635d59c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.623 238887 INFO nova.virt.libvirt.driver [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Ignoring supplied device name: /dev/vdb#033[00m
Feb  2 06:57:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.643 238887 DEBUG oslo_concurrency.lockutils [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.645 238887 DEBUG nova.network.neutron [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 06:57:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Feb  2 06:57:50 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.853 238887 DEBUG oslo_concurrency.lockutils [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Acquiring lock "643f1632-51eb-4ee3-a152-cea78635d59c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.854 238887 DEBUG oslo_concurrency.lockutils [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.854 238887 INFO nova.compute.manager [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Attaching volume f0082ec7-882a-4b7a-ad82-09ee0345ab7d to /dev/vdb#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.992 238887 DEBUG os_brick.utils [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 06:57:50 np0005604943 nova_compute[238883]: 2026-02-02 11:57:50.994 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.008 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.009 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[379b1618-3e3e-40b9-b4e5-466c3759c342]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.010 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.019 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.019 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[579dc7fb-a962-41b9-bef6-9f0eb1dc07af]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.022 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.025 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.032 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.032 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[286637c8-3c7b-4db9-8baa-b34b4c732774]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.034 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2ddb37-9641-4257-978b-ff71cf712126]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.034 238887 DEBUG oslo_concurrency.processutils [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.048 238887 DEBUG oslo_concurrency.processutils [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] CMD "nvme version" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.050 238887 DEBUG os_brick.initiator.connectors.lightos [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.050 238887 DEBUG os_brick.initiator.connectors.lightos [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.051 238887 DEBUG os_brick.initiator.connectors.lightos [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.051 238887 DEBUG os_brick.utils [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.052 238887 DEBUG nova.virt.block_device [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Updating existing volume attachment record: a6edaeed-b6f2-48b6-9351-6afd652e8062 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.362 238887 DEBUG nova.network.neutron [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Updating instance_info_cache with network_info: [{"id": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "address": "fa:16:3e:b7:29:95", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e53d89b-c3", "ovs_interfaceid": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.394 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Releasing lock "refresh_cache-e3333751-86a5-40df-9180-a0c8153f06a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.395 238887 DEBUG nova.compute.manager [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Instance network_info: |[{"id": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "address": "fa:16:3e:b7:29:95", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e53d89b-c3", "ovs_interfaceid": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.395 238887 DEBUG oslo_concurrency.lockutils [req-8ffc0053-9ec0-4f22-bd49-4afe6ee0f619 req-b492a0db-a53a-4be8-92b6-b29d3cacd55c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-e3333751-86a5-40df-9180-a0c8153f06a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.396 238887 DEBUG nova.network.neutron [req-8ffc0053-9ec0-4f22-bd49-4afe6ee0f619 req-b492a0db-a53a-4be8-92b6-b29d3cacd55c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Refreshing network info cache for port 2e53d89b-c3e1-480c-af8c-98b7e9b8d425 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.400 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Start _get_guest_xml network_info=[{"id": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "address": "fa:16:3e:b7:29:95", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e53d89b-c3", "ovs_interfaceid": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.406 238887 WARNING nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.411 238887 DEBUG nova.virt.libvirt.host [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.411 238887 DEBUG nova.virt.libvirt.host [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.414 238887 DEBUG nova.virt.libvirt.host [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.414 238887 DEBUG nova.virt.libvirt.host [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.415 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.415 238887 DEBUG nova.virt.hardware [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.415 238887 DEBUG nova.virt.hardware [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.415 238887 DEBUG nova.virt.hardware [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.416 238887 DEBUG nova.virt.hardware [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.416 238887 DEBUG nova.virt.hardware [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.416 238887 DEBUG nova.virt.hardware [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.416 238887 DEBUG nova.virt.hardware [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.416 238887 DEBUG nova.virt.hardware [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.417 238887 DEBUG nova.virt.hardware [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.417 238887 DEBUG nova.virt.hardware [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.417 238887 DEBUG nova.virt.hardware [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.421 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:57:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/385128630' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.895 238887 DEBUG nova.objects.instance [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lazy-loading 'flavor' on Instance uuid 643f1632-51eb-4ee3-a152-cea78635d59c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.917 238887 DEBUG nova.virt.libvirt.driver [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Attempting to attach volume f0082ec7-882a-4b7a-ad82-09ee0345ab7d with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.919 238887 DEBUG nova.virt.libvirt.guest [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 06:57:51 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:57:51 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-f0082ec7-882a-4b7a-ad82-09ee0345ab7d">
Feb  2 06:57:51 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:57:51 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:57:51 np0005604943 nova_compute[238883]:  <auth username="openstack">
Feb  2 06:57:51 np0005604943 nova_compute[238883]:    <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:57:51 np0005604943 nova_compute[238883]:  </auth>
Feb  2 06:57:51 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:57:51 np0005604943 nova_compute[238883]:  <serial>f0082ec7-882a-4b7a-ad82-09ee0345ab7d</serial>
Feb  2 06:57:51 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:57:51 np0005604943 nova_compute[238883]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 06:57:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:57:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2140613171' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.944 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.969 238887 DEBUG nova.storage.rbd_utils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image e3333751-86a5-40df-9180-a0c8153f06a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:51 np0005604943 nova_compute[238883]: 2026-02-02 11:57:51.975 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.021 238887 DEBUG nova.virt.libvirt.driver [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.021 238887 DEBUG nova.virt.libvirt.driver [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.022 238887 DEBUG nova.virt.libvirt.driver [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.022 238887 DEBUG nova.virt.libvirt.driver [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] No VIF found with MAC fa:16:3e:cc:bc:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.204 238887 DEBUG oslo_concurrency.lockutils [None req-1156b667-ac6a-47dc-b1af-47ae1862ad09 e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 257 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 469 KiB/s wr, 103 op/s
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.366 238887 DEBUG nova.network.neutron [req-8ffc0053-9ec0-4f22-bd49-4afe6ee0f619 req-b492a0db-a53a-4be8-92b6-b29d3cacd55c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Updated VIF entry in instance network info cache for port 2e53d89b-c3e1-480c-af8c-98b7e9b8d425. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.367 238887 DEBUG nova.network.neutron [req-8ffc0053-9ec0-4f22-bd49-4afe6ee0f619 req-b492a0db-a53a-4be8-92b6-b29d3cacd55c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Updating instance_info_cache with network_info: [{"id": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "address": "fa:16:3e:b7:29:95", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e53d89b-c3", "ovs_interfaceid": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.380 238887 DEBUG oslo_concurrency.lockutils [req-8ffc0053-9ec0-4f22-bd49-4afe6ee0f619 req-b492a0db-a53a-4be8-92b6-b29d3cacd55c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-e3333751-86a5-40df-9180-a0c8153f06a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:57:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:57:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:57:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843857275' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.519 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.521 238887 DEBUG nova.virt.libvirt.vif [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:57:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-395658677',display_name='tempest-VolumesBackupsTest-instance-395658677',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-395658677',id=7,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH2E7G7yXreKh+R/5pZvHxH41KpQ1/cpxT1k5tVX9U3p92cG1tl6U58Hl2cMaNmii3kF0ulyFdE8uKaIFXxXHpjnBCsHQnsvTg/if5l+M1u7+7jeXkdUA5ba6jhNDG/1eQ==',key_name='tempest-keypair-259832784',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61afd70cadc143c2a9c65f6cec8dc9e8',ramdisk_id='',reservation_id='r-32l91c70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1949354358',owner_user_name='tempest-VolumesBackupsTest-1949354358-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:57:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='619ce2f20dd849f6a462d2162bcccc7a',uuid=e3333751-86a5-40df-9180-a0c8153f06a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "address": "fa:16:3e:b7:29:95", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e53d89b-c3", "ovs_interfaceid": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.521 238887 DEBUG nova.network.os_vif_util [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Converting VIF {"id": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "address": "fa:16:3e:b7:29:95", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e53d89b-c3", "ovs_interfaceid": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.522 238887 DEBUG nova.network.os_vif_util [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:29:95,bridge_name='br-int',has_traffic_filtering=True,id=2e53d89b-c3e1-480c-af8c-98b7e9b8d425,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e53d89b-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.523 238887 DEBUG nova.objects.instance [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lazy-loading 'pci_devices' on Instance uuid e3333751-86a5-40df-9180-a0c8153f06a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.538 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] End _get_guest_xml xml=<domain type="kvm">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  <uuid>e3333751-86a5-40df-9180-a0c8153f06a4</uuid>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  <name>instance-00000007</name>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <nova:name>tempest-VolumesBackupsTest-instance-395658677</nova:name>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 11:57:51</nova:creationTime>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:        <nova:user uuid="619ce2f20dd849f6a462d2162bcccc7a">tempest-VolumesBackupsTest-1949354358-project-member</nova:user>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:        <nova:project uuid="61afd70cadc143c2a9c65f6cec8dc9e8">tempest-VolumesBackupsTest-1949354358</nova:project>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:        <nova:port uuid="2e53d89b-c3e1-480c-af8c-98b7e9b8d425">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <system>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <entry name="serial">e3333751-86a5-40df-9180-a0c8153f06a4</entry>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <entry name="uuid">e3333751-86a5-40df-9180-a0c8153f06a4</entry>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    </system>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  <os>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  </clock>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/e3333751-86a5-40df-9180-a0c8153f06a4_disk">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/e3333751-86a5-40df-9180-a0c8153f06a4_disk.config">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:b7:29:95"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <target dev="tap2e53d89b-c3"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/e3333751-86a5-40df-9180-a0c8153f06a4/console.log" append="off"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    </serial>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <video>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 06:57:52 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 06:57:52 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:57:52 np0005604943 nova_compute[238883]: </domain>
Feb  2 06:57:52 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.538 238887 DEBUG nova.compute.manager [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Preparing to wait for external event network-vif-plugged-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.539 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.539 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.540 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.541 238887 DEBUG nova.virt.libvirt.vif [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:57:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-395658677',display_name='tempest-VolumesBackupsTest-instance-395658677',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-395658677',id=7,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH2E7G7yXreKh+R/5pZvHxH41KpQ1/cpxT1k5tVX9U3p92cG1tl6U58Hl2cMaNmii3kF0ulyFdE8uKaIFXxXHpjnBCsHQnsvTg/if5l+M1u7+7jeXkdUA5ba6jhNDG/1eQ==',key_name='tempest-keypair-259832784',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61afd70cadc143c2a9c65f6cec8dc9e8',ramdisk_id='',reservation_id='r-32l91c70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1949354358',owner_user_name='tempest-VolumesBackupsTest-1949354358-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:57:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='619ce2f20dd849f6a462d2162bcccc7a',uuid=e3333751-86a5-40df-9180-a0c8153f06a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "address": "fa:16:3e:b7:29:95", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e53d89b-c3", "ovs_interfaceid": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.541 238887 DEBUG nova.network.os_vif_util [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Converting VIF {"id": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "address": "fa:16:3e:b7:29:95", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e53d89b-c3", "ovs_interfaceid": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.542 238887 DEBUG nova.network.os_vif_util [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:29:95,bridge_name='br-int',has_traffic_filtering=True,id=2e53d89b-c3e1-480c-af8c-98b7e9b8d425,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e53d89b-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.543 238887 DEBUG os_vif [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:29:95,bridge_name='br-int',has_traffic_filtering=True,id=2e53d89b-c3e1-480c-af8c-98b7e9b8d425,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e53d89b-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.544 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.544 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.545 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.550 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.550 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e53d89b-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.551 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e53d89b-c3, col_values=(('external_ids', {'iface-id': '2e53d89b-c3e1-480c-af8c-98b7e9b8d425', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:29:95', 'vm-uuid': 'e3333751-86a5-40df-9180-a0c8153f06a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.553 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:52 np0005604943 NetworkManager[49093]: <info>  [1770033472.5539] manager: (tap2e53d89b-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.555 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.558 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.559 238887 INFO os_vif [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:29:95,bridge_name='br-int',has_traffic_filtering=True,id=2e53d89b-c3e1-480c-af8c-98b7e9b8d425,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e53d89b-c3')#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.614 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.615 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.615 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No VIF found with MAC fa:16:3e:b7:29:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.616 238887 INFO nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Using config drive#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.640 238887 DEBUG nova.storage.rbd_utils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image e3333751-86a5-40df-9180-a0c8153f06a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Feb  2 06:57:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Feb  2 06:57:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.873 238887 INFO nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Creating config drive at /var/lib/nova/instances/e3333751-86a5-40df-9180-a0c8153f06a4/disk.config#033[00m
Feb  2 06:57:52 np0005604943 nova_compute[238883]: 2026-02-02 11:57:52.878 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e3333751-86a5-40df-9180-a0c8153f06a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp307m1b7q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.002 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e3333751-86a5-40df-9180-a0c8153f06a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp307m1b7q" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.032 238887 DEBUG nova.storage.rbd_utils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image e3333751-86a5-40df-9180-a0c8153f06a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.037 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e3333751-86a5-40df-9180-a0c8153f06a4/disk.config e3333751-86a5-40df-9180-a0c8153f06a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.170 238887 DEBUG oslo_concurrency.processutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e3333751-86a5-40df-9180-a0c8153f06a4/disk.config e3333751-86a5-40df-9180-a0c8153f06a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.170 238887 INFO nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Deleting local config drive /var/lib/nova/instances/e3333751-86a5-40df-9180-a0c8153f06a4/disk.config because it was imported into RBD.#033[00m
Feb  2 06:57:53 np0005604943 kernel: tap2e53d89b-c3: entered promiscuous mode
Feb  2 06:57:53 np0005604943 NetworkManager[49093]: <info>  [1770033473.2149] manager: (tap2e53d89b-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Feb  2 06:57:53 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:53Z|00065|binding|INFO|Claiming lport 2e53d89b-c3e1-480c-af8c-98b7e9b8d425 for this chassis.
Feb  2 06:57:53 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:53Z|00066|binding|INFO|2e53d89b-c3e1-480c-af8c-98b7e9b8d425: Claiming fa:16:3e:b7:29:95 10.100.0.9
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.216 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:53 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:53Z|00067|binding|INFO|Setting lport 2e53d89b-c3e1-480c-af8c-98b7e9b8d425 ovn-installed in OVS
Feb  2 06:57:53 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:53Z|00068|binding|INFO|Setting lport 2e53d89b-c3e1-480c-af8c-98b7e9b8d425 up in Southbound
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.222 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:29:95 10.100.0.9'], port_security=['fa:16:3e:b7:29:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e3333751-86a5-40df-9180-a0c8153f06a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-302d1601-7819-4001-9e16-ee97183eb73b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61afd70cadc143c2a9c65f6cec8dc9e8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf550be2-fb79-4050-9dfb-2bfa7b384f11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb72e047-676c-4da5-9d5d-6a9b44c0057a, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=2e53d89b-c3e1-480c-af8c-98b7e9b8d425) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.225 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.227 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.227 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 2e53d89b-c3e1-480c-af8c-98b7e9b8d425 in datapath 302d1601-7819-4001-9e16-ee97183eb73b bound to our chassis#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.231 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 302d1601-7819-4001-9e16-ee97183eb73b#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.244 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3a453a1c-c4d7-4a2e-9c3d-f2bf78920bbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.245 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap302d1601-71 in ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 06:57:53 np0005604943 systemd-machined[206973]: New machine qemu-7-instance-00000007.
Feb  2 06:57:53 np0005604943 systemd-udevd[250022]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.248 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap302d1601-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.248 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d2120873-db51-4d32-8e66-6c822fad2b69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.249 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[75fae50a-13b8-4b3d-880b-aa829e2ac281]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 NetworkManager[49093]: <info>  [1770033473.2608] device (tap2e53d89b-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.259 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[962463b3-64aa-4460-9873-d431cce02a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 NetworkManager[49093]: <info>  [1770033473.2619] device (tap2e53d89b-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 06:57:53 np0005604943 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.273 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6493222e-2f49-44cb-b8b5-f7c476d1e074]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.306 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[dad8fb1a-65f8-4d87-87cb-c098c10df0a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 NetworkManager[49093]: <info>  [1770033473.3139] manager: (tap302d1601-70): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.313 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d6974f47-a7b4-4740-a96a-2ebfb7737314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 systemd-udevd[250025]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.343 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[44d08704-9a2d-4346-88fe-1faca2b18af3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.346 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[7143fd82-a2f3-4b6e-8062-72e92d71ec9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 NetworkManager[49093]: <info>  [1770033473.3632] device (tap302d1601-70): carrier: link connected
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.366 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[4e6414ce-c454-4bd2-8c79-0119966ae4c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.377 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4bac1f-caa1-41e9-b8c2-011acb50b78a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap302d1601-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:b2:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 391871, 'reachable_time': 28636, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250054, 'error': None, 'target': 'ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.395 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7faeae31-75c0-41a3-8f60-458d818ccfe6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:b2d7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 391871, 'tstamp': 391871}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250055, 'error': None, 'target': 'ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.410 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6494bdf9-6ed7-4bd6-b1a9-8c07c0430293]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap302d1601-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:b2:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 391871, 'reachable_time': 28636, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250056, 'error': None, 'target': 'ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.425 238887 DEBUG nova.compute.manager [req-695dd1ca-592b-4da6-a23f-174cadfd6300 req-a1082d5d-61d2-4065-bf14-44dbfa43e481 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Received event network-vif-plugged-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.426 238887 DEBUG oslo_concurrency.lockutils [req-695dd1ca-592b-4da6-a23f-174cadfd6300 req-a1082d5d-61d2-4065-bf14-44dbfa43e481 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.426 238887 DEBUG oslo_concurrency.lockutils [req-695dd1ca-592b-4da6-a23f-174cadfd6300 req-a1082d5d-61d2-4065-bf14-44dbfa43e481 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.426 238887 DEBUG oslo_concurrency.lockutils [req-695dd1ca-592b-4da6-a23f-174cadfd6300 req-a1082d5d-61d2-4065-bf14-44dbfa43e481 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.426 238887 DEBUG nova.compute.manager [req-695dd1ca-592b-4da6-a23f-174cadfd6300 req-a1082d5d-61d2-4065-bf14-44dbfa43e481 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Processing event network-vif-plugged-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.438 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[72fe83e5-e31b-4c9f-aca7-ee4e0539a054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.501 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[f2bc85f6-2fee-48aa-acbd-bafca1f0bf78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.503 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap302d1601-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.503 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.504 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap302d1601-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:53 np0005604943 NetworkManager[49093]: <info>  [1770033473.5064] manager: (tap302d1601-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.505 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:53 np0005604943 kernel: tap302d1601-70: entered promiscuous mode
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.514 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap302d1601-70, col_values=(('external_ids', {'iface-id': '7f7a24e7-2e36-4c1c-8857-8367e857534f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:53 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:53Z|00069|binding|INFO|Releasing lport 7f7a24e7-2e36-4c1c-8857-8367e857534f from this chassis (sb_readonly=0)
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.516 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.522 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.522 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/302d1601-7819-4001-9e16-ee97183eb73b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/302d1601-7819-4001-9e16-ee97183eb73b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.523 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[47f8385d-9227-412d-826f-2fd79073d4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.524 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-302d1601-7819-4001-9e16-ee97183eb73b
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/302d1601-7819-4001-9e16-ee97183eb73b.pid.haproxy
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID 302d1601-7819-4001-9e16-ee97183eb73b
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 06:57:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:53.524 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b', 'env', 'PROCESS_TAG=haproxy-302d1601-7819-4001-9e16-ee97183eb73b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/302d1601-7819-4001-9e16-ee97183eb73b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.695 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033473.6954658, e3333751-86a5-40df-9180-a0c8153f06a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.697 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] VM Started (Lifecycle Event)#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.702 238887 DEBUG nova.compute.manager [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.706 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.710 238887 INFO nova.virt.libvirt.driver [-] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Instance spawned successfully.#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.710 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 06:57:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.727 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.735 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:57:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.738 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.739 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.739 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.739 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.740 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.740 238887 DEBUG nova.virt.libvirt.driver [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:57:53 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.773 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.774 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033473.6955526, e3333751-86a5-40df-9180-a0c8153f06a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.774 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] VM Paused (Lifecycle Event)#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.803 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.807 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033473.7055266, e3333751-86a5-40df-9180-a0c8153f06a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.808 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] VM Resumed (Lifecycle Event)#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.826 238887 INFO nova.compute.manager [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Took 5.49 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.826 238887 DEBUG nova.compute.manager [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.827 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.834 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.866 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:57:53 np0005604943 podman[250129]: 2026-02-02 11:57:53.873875941 +0000 UTC m=+0.054244984 container create d9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.893 238887 INFO nova.compute.manager [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Took 6.48 seconds to build instance.#033[00m
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.912 238887 DEBUG oslo_concurrency.lockutils [None req-a19e22ea-c383-4bbb-bfb5-9f1df3440b87 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:53 np0005604943 systemd[1]: Started libpod-conmon-d9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9.scope.
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.937 238887 DEBUG nova.compute.manager [req-9dbe300e-a870-49e0-8ecc-be6335bea9c4 req-8f9c60ff-d4bd-4acf-8fde-25f22fa09ed3 c0d8e826e9e84c8a96887ca462a1c1b7 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Received event volume-extended-f0082ec7-882a-4b7a-ad82-09ee0345ab7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:57:53 np0005604943 podman[250129]: 2026-02-02 11:57:53.844544832 +0000 UTC m=+0.024913895 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 06:57:53 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.952 238887 DEBUG nova.compute.manager [req-9dbe300e-a870-49e0-8ecc-be6335bea9c4 req-8f9c60ff-d4bd-4acf-8fde-25f22fa09ed3 c0d8e826e9e84c8a96887ca462a1c1b7 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Handling volume-extended event for volume f0082ec7-882a-4b7a-ad82-09ee0345ab7d extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896#033[00m
Feb  2 06:57:53 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79175fe41426ce05f180faf58ba431d42d067fcc9c0bcb5a55aa74b24f9ab9b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:53 np0005604943 podman[250129]: 2026-02-02 11:57:53.964498389 +0000 UTC m=+0.144867452 container init d9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:57:53 np0005604943 nova_compute[238883]: 2026-02-02 11:57:53.964 238887 INFO nova.compute.manager [req-9dbe300e-a870-49e0-8ecc-be6335bea9c4 req-8f9c60ff-d4bd-4acf-8fde-25f22fa09ed3 c0d8e826e9e84c8a96887ca462a1c1b7 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Cinder extended volume f0082ec7-882a-4b7a-ad82-09ee0345ab7d; extending it to detect new size#033[00m
Feb  2 06:57:53 np0005604943 podman[250129]: 2026-02-02 11:57:53.969111429 +0000 UTC m=+0.149480472 container start d9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:57:53 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[250145]: [NOTICE]   (250149) : New worker (250151) forked
Feb  2 06:57:53 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[250145]: [NOTICE]   (250149) : Loading success.
Feb  2 06:57:54 np0005604943 nova_compute[238883]: 2026-02-02 11:57:54.073 238887 DEBUG nova.virt.libvirt.driver [req-9dbe300e-a870-49e0-8ecc-be6335bea9c4 req-8f9c60ff-d4bd-4acf-8fde-25f22fa09ed3 c0d8e826e9e84c8a96887ca462a1c1b7 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Resizing target device vdb to 2147483648 _resize_attached_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2756#033[00m
Feb  2 06:57:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 293 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 3.9 MiB/s wr, 160 op/s
Feb  2 06:57:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:57:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:57:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:57:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:57:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:57:54 np0005604943 nova_compute[238883]: 2026-02-02 11:57:54.942 238887 DEBUG oslo_concurrency.lockutils [None req-6637bd50-b617-467e-8034-b346e97c687d 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:54 np0005604943 nova_compute[238883]: 2026-02-02 11:57:54.942 238887 DEBUG oslo_concurrency.lockutils [None req-6637bd50-b617-467e-8034-b346e97c687d 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:57:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:57:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:57:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:57:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:57:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:57:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:57:54 np0005604943 nova_compute[238883]: 2026-02-02 11:57:54.960 238887 INFO nova.compute.manager [None req-6637bd50-b617-467e-8034-b346e97c687d 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Detaching volume 539a75fe-1001-4f8c-81ab-40858fd6839c#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.003 238887 DEBUG oslo_concurrency.lockutils [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.097 238887 INFO nova.virt.block_device [None req-6637bd50-b617-467e-8034-b346e97c687d 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Attempting to driver detach volume 539a75fe-1001-4f8c-81ab-40858fd6839c from mountpoint /dev/vdb#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.107 238887 DEBUG nova.virt.libvirt.driver [None req-6637bd50-b617-467e-8034-b346e97c687d 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Attempting to detach device vdb from instance 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.108 238887 DEBUG nova.virt.libvirt.guest [None req-6637bd50-b617-467e-8034-b346e97c687d 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-539a75fe-1001-4f8c-81ab-40858fd6839c">
Feb  2 06:57:55 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <serial>539a75fe-1001-4f8c-81ab-40858fd6839c</serial>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:57:55 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.116 238887 INFO nova.virt.libvirt.driver [None req-6637bd50-b617-467e-8034-b346e97c687d 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Successfully detached device vdb from instance 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 from the persistent domain config.#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.116 238887 DEBUG nova.virt.libvirt.driver [None req-6637bd50-b617-467e-8034-b346e97c687d 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.117 238887 DEBUG nova.virt.libvirt.guest [None req-6637bd50-b617-467e-8034-b346e97c687d 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-539a75fe-1001-4f8c-81ab-40858fd6839c">
Feb  2 06:57:55 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <serial>539a75fe-1001-4f8c-81ab-40858fd6839c</serial>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:57:55 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.177 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Received event <DeviceRemovedEvent: 1770033475.176878, 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.182 238887 DEBUG nova.virt.libvirt.driver [None req-6637bd50-b617-467e-8034-b346e97c687d 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.184 238887 DEBUG oslo_concurrency.lockutils [None req-c592f7bb-d38a-49eb-913d-507268b77b8e e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Acquiring lock "643f1632-51eb-4ee3-a152-cea78635d59c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.185 238887 DEBUG oslo_concurrency.lockutils [None req-c592f7bb-d38a-49eb-913d-507268b77b8e e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.187 238887 INFO nova.virt.libvirt.driver [None req-6637bd50-b617-467e-8034-b346e97c687d 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Successfully detached device vdb from instance 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 from the live domain config.#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.201 238887 INFO nova.compute.manager [None req-c592f7bb-d38a-49eb-913d-507268b77b8e e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Detaching volume f0082ec7-882a-4b7a-ad82-09ee0345ab7d#033[00m
Feb  2 06:57:55 np0005604943 podman[250305]: 2026-02-02 11:57:55.309874367 +0000 UTC m=+0.036246572 container create ff5173782bc6e43a501da44d4bcdf895f30848db5e86d823229671dc7413ff83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamarr, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 06:57:55 np0005604943 systemd[1]: Started libpod-conmon-ff5173782bc6e43a501da44d4bcdf895f30848db5e86d823229671dc7413ff83.scope.
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.350 238887 INFO nova.virt.block_device [None req-c592f7bb-d38a-49eb-913d-507268b77b8e e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Attempting to driver detach volume f0082ec7-882a-4b7a-ad82-09ee0345ab7d from mountpoint /dev/vdb#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.352 238887 DEBUG nova.objects.instance [None req-6637bd50-b617-467e-8034-b346e97c687d 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lazy-loading 'flavor' on Instance uuid 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.360 238887 DEBUG nova.virt.libvirt.driver [None req-c592f7bb-d38a-49eb-913d-507268b77b8e e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Attempting to detach device vdb from instance 643f1632-51eb-4ee3-a152-cea78635d59c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.361 238887 DEBUG nova.virt.libvirt.guest [None req-c592f7bb-d38a-49eb-913d-507268b77b8e e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-f0082ec7-882a-4b7a-ad82-09ee0345ab7d">
Feb  2 06:57:55 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <serial>f0082ec7-882a-4b7a-ad82-09ee0345ab7d</serial>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:57:55 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.366 238887 INFO nova.virt.libvirt.driver [None req-c592f7bb-d38a-49eb-913d-507268b77b8e e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Successfully detached device vdb from instance 643f1632-51eb-4ee3-a152-cea78635d59c from the persistent domain config.#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.367 238887 DEBUG nova.virt.libvirt.driver [None req-c592f7bb-d38a-49eb-913d-507268b77b8e e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 643f1632-51eb-4ee3-a152-cea78635d59c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.367 238887 DEBUG nova.virt.libvirt.guest [None req-c592f7bb-d38a-49eb-913d-507268b77b8e e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-f0082ec7-882a-4b7a-ad82-09ee0345ab7d">
Feb  2 06:57:55 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <serial>f0082ec7-882a-4b7a-ad82-09ee0345ab7d</serial>
Feb  2 06:57:55 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 06:57:55 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:57:55 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 06:57:55 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:57:55 np0005604943 podman[250305]: 2026-02-02 11:57:55.293875557 +0000 UTC m=+0.020247802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.390 238887 DEBUG oslo_concurrency.lockutils [None req-6637bd50-b617-467e-8034-b346e97c687d 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:55 np0005604943 podman[250305]: 2026-02-02 11:57:55.392602677 +0000 UTC m=+0.118974912 container init ff5173782bc6e43a501da44d4bcdf895f30848db5e86d823229671dc7413ff83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.393 238887 DEBUG oslo_concurrency.lockutils [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.393 238887 DEBUG oslo_concurrency.lockutils [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.393 238887 DEBUG oslo_concurrency.lockutils [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.394 238887 DEBUG oslo_concurrency.lockutils [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.395 238887 INFO nova.compute.manager [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Terminating instance#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.396 238887 DEBUG nova.compute.manager [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 06:57:55 np0005604943 podman[250305]: 2026-02-02 11:57:55.39998456 +0000 UTC m=+0.126356775 container start ff5173782bc6e43a501da44d4bcdf895f30848db5e86d823229671dc7413ff83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 06:57:55 np0005604943 podman[250305]: 2026-02-02 11:57:55.403338968 +0000 UTC m=+0.129711203 container attach ff5173782bc6e43a501da44d4bcdf895f30848db5e86d823229671dc7413ff83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 06:57:55 np0005604943 cool_lamarr[250322]: 167 167
Feb  2 06:57:55 np0005604943 systemd[1]: libpod-ff5173782bc6e43a501da44d4bcdf895f30848db5e86d823229671dc7413ff83.scope: Deactivated successfully.
Feb  2 06:57:55 np0005604943 podman[250305]: 2026-02-02 11:57:55.410065074 +0000 UTC m=+0.136437309 container died ff5173782bc6e43a501da44d4bcdf895f30848db5e86d823229671dc7413ff83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamarr, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:57:55 np0005604943 systemd[1]: var-lib-containers-storage-overlay-5269f82be05d4bae61dc93ba8b5198575aa3d22eeca5126addff1a042bff122f-merged.mount: Deactivated successfully.
Feb  2 06:57:55 np0005604943 kernel: tapebc43340-2b (unregistering): left promiscuous mode
Feb  2 06:57:55 np0005604943 NetworkManager[49093]: <info>  [1770033475.4443] device (tapebc43340-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 06:57:55 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:55Z|00070|binding|INFO|Releasing lport ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c from this chassis (sb_readonly=0)
Feb  2 06:57:55 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:55Z|00071|binding|INFO|Setting lport ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c down in Southbound
Feb  2 06:57:55 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:55Z|00072|binding|INFO|Removing iface tapebc43340-2b ovn-installed in OVS
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.452 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.454 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:55 np0005604943 podman[250305]: 2026-02-02 11:57:55.458591998 +0000 UTC m=+0.184964213 container remove ff5173782bc6e43a501da44d4bcdf895f30848db5e86d823229671dc7413ff83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.460 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:f9:36 10.100.0.9'], port_security=['fa:16:3e:61:f9:36 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-edd3a331-b14a-4730-a21c-7fc793b77005', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c061a009eae241049a1e3a1c35aa2503', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9af29c12-9f42-4791-af1d-67ddceeec2d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be5764d7-de7f-4844-afc6-7eadee6d6d3c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.460 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.462 155011 INFO neutron.agent.ovn.metadata.agent [-] Port ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c in datapath edd3a331-b14a-4730-a21c-7fc793b77005 unbound from our chassis#033[00m
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.463 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network edd3a331-b14a-4730-a21c-7fc793b77005, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.464 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6fb1f3-e5cf-484e-b2cc-a790729d135f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.464 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005 namespace which is not needed anymore#033[00m
Feb  2 06:57:55 np0005604943 systemd[1]: libpod-conmon-ff5173782bc6e43a501da44d4bcdf895f30848db5e86d823229671dc7413ff83.scope: Deactivated successfully.
Feb  2 06:57:55 np0005604943 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Feb  2 06:57:55 np0005604943 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 12.283s CPU time.
Feb  2 06:57:55 np0005604943 systemd-machined[206973]: Machine qemu-5-instance-00000005 terminated.
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.501 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Received event <DeviceRemovedEvent: 1770033475.5016408, 643f1632-51eb-4ee3-a152-cea78635d59c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.503 238887 DEBUG nova.virt.libvirt.driver [None req-c592f7bb-d38a-49eb-913d-507268b77b8e e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 643f1632-51eb-4ee3-a152-cea78635d59c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.508 238887 INFO nova.virt.libvirt.driver [None req-c592f7bb-d38a-49eb-913d-507268b77b8e e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Successfully detached device vdb from instance 643f1632-51eb-4ee3-a152-cea78635d59c from the live domain config.#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.521 238887 DEBUG nova.compute.manager [req-b64f45cc-aa1e-4fd4-b390-61660bd32564 req-87e08af4-82fe-4cfa-a858-28aa36784b6f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Received event network-vif-plugged-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.521 238887 DEBUG oslo_concurrency.lockutils [req-b64f45cc-aa1e-4fd4-b390-61660bd32564 req-87e08af4-82fe-4cfa-a858-28aa36784b6f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.521 238887 DEBUG oslo_concurrency.lockutils [req-b64f45cc-aa1e-4fd4-b390-61660bd32564 req-87e08af4-82fe-4cfa-a858-28aa36784b6f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.521 238887 DEBUG oslo_concurrency.lockutils [req-b64f45cc-aa1e-4fd4-b390-61660bd32564 req-87e08af4-82fe-4cfa-a858-28aa36784b6f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.522 238887 DEBUG nova.compute.manager [req-b64f45cc-aa1e-4fd4-b390-61660bd32564 req-87e08af4-82fe-4cfa-a858-28aa36784b6f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] No waiting events found dispatching network-vif-plugged-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.522 238887 WARNING nova.compute.manager [req-b64f45cc-aa1e-4fd4-b390-61660bd32564 req-87e08af4-82fe-4cfa-a858-28aa36784b6f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Received unexpected event network-vif-plugged-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 for instance with vm_state active and task_state None.#033[00m
Feb  2 06:57:55 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[249074]: [NOTICE]   (249078) : haproxy version is 2.8.14-c23fe91
Feb  2 06:57:55 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[249074]: [NOTICE]   (249078) : path to executable is /usr/sbin/haproxy
Feb  2 06:57:55 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[249074]: [WARNING]  (249078) : Exiting Master process...
Feb  2 06:57:55 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[249074]: [ALERT]    (249078) : Current worker (249080) exited with code 143 (Terminated)
Feb  2 06:57:55 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[249074]: [WARNING]  (249078) : All workers exited. Exiting... (0)
Feb  2 06:57:55 np0005604943 systemd[1]: libpod-75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9.scope: Deactivated successfully.
Feb  2 06:57:55 np0005604943 podman[250368]: 2026-02-02 11:57:55.611617741 +0000 UTC m=+0.054713056 container create ec2c7871bbb5a4c58f4eb4cab4e5e1eb2ea5ee2cb5242a344d3fa69140258f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.627 238887 INFO nova.virt.libvirt.driver [-] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Instance destroyed successfully.#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.628 238887 DEBUG nova.objects.instance [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lazy-loading 'resources' on Instance uuid 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:55 np0005604943 podman[250367]: 2026-02-02 11:57:55.635049886 +0000 UTC m=+0.081169780 container died 75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.643 238887 DEBUG nova.virt.libvirt.vif [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:57:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-233799635',display_name='tempest-VolumesSnapshotTestJSON-instance-233799635',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-233799635',id=5,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHb1ZEaIb8UZnHykAYp8EjqNATdm5jdbLuafEvNxVV1vyzKYWrK1BW5Doc1xOOlNSAWHW3YeBnTyxM8UeJU92Fn0f4HjOOs4ewzJZPOUJYDbLHigfQvvW8aA+1/eu17SoQ==',key_name='tempest-keypair-1798394470',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:57:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c061a009eae241049a1e3a1c35aa2503',ramdisk_id='',reservation_id='r-1rxqures',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-2018180325',owner_user_name='tempest-VolumesSnapshotTestJSON-2018180325-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:57:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4846ccd205b54116a828ad91820ef58d',uuid=9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "address": "fa:16:3e:61:f9:36", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebc43340-2b", "ovs_interfaceid": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.643 238887 DEBUG nova.network.os_vif_util [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Converting VIF {"id": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "address": "fa:16:3e:61:f9:36", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebc43340-2b", "ovs_interfaceid": "ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.644 238887 DEBUG nova.network.os_vif_util [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:61:f9:36,bridge_name='br-int',has_traffic_filtering=True,id=ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebc43340-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.645 238887 DEBUG os_vif [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:f9:36,bridge_name='br-int',has_traffic_filtering=True,id=ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebc43340-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.646 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.647 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc43340-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.649 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.650 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.652 238887 INFO os_vif [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:f9:36,bridge_name='br-int',has_traffic_filtering=True,id=ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebc43340-2b')#033[00m
Feb  2 06:57:55 np0005604943 systemd[1]: Started libpod-conmon-ec2c7871bbb5a4c58f4eb4cab4e5e1eb2ea5ee2cb5242a344d3fa69140258f53.scope.
Feb  2 06:57:55 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9-userdata-shm.mount: Deactivated successfully.
Feb  2 06:57:55 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4a8c93410ff91e7082a42287a7b49fc8bb509e3ad5ab13178c411d5a575fedfd-merged.mount: Deactivated successfully.
Feb  2 06:57:55 np0005604943 podman[250367]: 2026-02-02 11:57:55.675755924 +0000 UTC m=+0.121875798 container cleanup 75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 06:57:55 np0005604943 podman[250368]: 2026-02-02 11:57:55.577909117 +0000 UTC m=+0.021004452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.678 238887 DEBUG nova.compute.manager [req-df88de5f-c6b6-4393-923e-b2f226154910 req-0224c784-1a92-4a2d-8f18-c1c08341a6de 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Received event network-vif-unplugged-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.678 238887 DEBUG oslo_concurrency.lockutils [req-df88de5f-c6b6-4393-923e-b2f226154910 req-0224c784-1a92-4a2d-8f18-c1c08341a6de 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.678 238887 DEBUG oslo_concurrency.lockutils [req-df88de5f-c6b6-4393-923e-b2f226154910 req-0224c784-1a92-4a2d-8f18-c1c08341a6de 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.679 238887 DEBUG oslo_concurrency.lockutils [req-df88de5f-c6b6-4393-923e-b2f226154910 req-0224c784-1a92-4a2d-8f18-c1c08341a6de 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.679 238887 DEBUG nova.compute.manager [req-df88de5f-c6b6-4393-923e-b2f226154910 req-0224c784-1a92-4a2d-8f18-c1c08341a6de 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] No waiting events found dispatching network-vif-unplugged-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.679 238887 DEBUG nova.compute.manager [req-df88de5f-c6b6-4393-923e-b2f226154910 req-0224c784-1a92-4a2d-8f18-c1c08341a6de 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Received event network-vif-unplugged-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 06:57:55 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:57:55 np0005604943 systemd[1]: libpod-conmon-75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9.scope: Deactivated successfully.
Feb  2 06:57:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc6fef9f2dabd683113ebcad7f6065d13048f350647a86302a200239f2d479a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc6fef9f2dabd683113ebcad7f6065d13048f350647a86302a200239f2d479a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc6fef9f2dabd683113ebcad7f6065d13048f350647a86302a200239f2d479a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc6fef9f2dabd683113ebcad7f6065d13048f350647a86302a200239f2d479a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc6fef9f2dabd683113ebcad7f6065d13048f350647a86302a200239f2d479a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.697 238887 DEBUG nova.objects.instance [None req-c592f7bb-d38a-49eb-913d-507268b77b8e e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lazy-loading 'flavor' on Instance uuid 643f1632-51eb-4ee3-a152-cea78635d59c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:55 np0005604943 podman[250368]: 2026-02-02 11:57:55.708999596 +0000 UTC m=+0.152094931 container init ec2c7871bbb5a4c58f4eb4cab4e5e1eb2ea5ee2cb5242a344d3fa69140258f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_lewin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 06:57:55 np0005604943 podman[250368]: 2026-02-02 11:57:55.71640795 +0000 UTC m=+0.159503265 container start ec2c7871bbb5a4c58f4eb4cab4e5e1eb2ea5ee2cb5242a344d3fa69140258f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.732 238887 DEBUG oslo_concurrency.lockutils [None req-c592f7bb-d38a-49eb-913d-507268b77b8e e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:55 np0005604943 podman[250368]: 2026-02-02 11:57:55.7366407 +0000 UTC m=+0.179736045 container attach ec2c7871bbb5a4c58f4eb4cab4e5e1eb2ea5ee2cb5242a344d3fa69140258f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_lewin, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:57:55 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:57:55 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:57:55 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:57:55 np0005604943 podman[250441]: 2026-02-02 11:57:55.784724822 +0000 UTC m=+0.087690221 container remove 75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.790 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[85bfc304-e0e7-456c-87cf-1b8ef83cfb35]: (4, ('Mon Feb  2 11:57:55 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005 (75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9)\n75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9\nMon Feb  2 11:57:55 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005 (75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9)\n75fcbfedef73d12694224841abfe49684e384a5f38a32a170e8e60bce98e19b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.792 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[af7e99d5-2b0a-4aea-88fa-5a0479e33044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.795 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedd3a331-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.799 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:55 np0005604943 kernel: tapedd3a331-b0: left promiscuous mode
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.804 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[017f3941-f515-409c-99e6-edf23ccd8488]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.810 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.818 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[43fded19-5b49-4649-b2a6-8791cdc36fe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.820 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9ff43767-3c06-45e7-b359-d92bd9e44f95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.834 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d3404440-93e5-444d-8658-e2798c26b379]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388741, 'reachable_time': 30132, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250460, 'error': None, 'target': 'ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.837 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 06:57:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:55.837 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf19742-8131-4205-937f-e0de3c2424f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.980 238887 INFO nova.virt.libvirt.driver [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Deleting instance files /var/lib/nova/instances/9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_del#033[00m
Feb  2 06:57:55 np0005604943 nova_compute[238883]: 2026-02-02 11:57:55.980 238887 INFO nova.virt.libvirt.driver [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Deletion of /var/lib/nova/instances/9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9_del complete#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.072 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.078 238887 INFO nova.compute.manager [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.079 238887 DEBUG oslo.service.loopingcall [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.079 238887 DEBUG nova.compute.manager [-] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.079 238887 DEBUG nova.network.neutron [-] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 06:57:56 np0005604943 determined_lewin[250422]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:57:56 np0005604943 determined_lewin[250422]: --> All data devices are unavailable
Feb  2 06:57:56 np0005604943 systemd[1]: libpod-ec2c7871bbb5a4c58f4eb4cab4e5e1eb2ea5ee2cb5242a344d3fa69140258f53.scope: Deactivated successfully.
Feb  2 06:57:56 np0005604943 podman[250368]: 2026-02-02 11:57:56.17544158 +0000 UTC m=+0.618536895 container died ec2c7871bbb5a4c58f4eb4cab4e5e1eb2ea5ee2cb5242a344d3fa69140258f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_lewin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:57:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 293 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 3.3 MiB/s wr, 120 op/s
Feb  2 06:57:56 np0005604943 podman[250368]: 2026-02-02 11:57:56.211828015 +0000 UTC m=+0.654923330 container remove ec2c7871bbb5a4c58f4eb4cab4e5e1eb2ea5ee2cb5242a344d3fa69140258f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:57:56 np0005604943 systemd[1]: libpod-conmon-ec2c7871bbb5a4c58f4eb4cab4e5e1eb2ea5ee2cb5242a344d3fa69140258f53.scope: Deactivated successfully.
Feb  2 06:57:56 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4cc6fef9f2dabd683113ebcad7f6065d13048f350647a86302a200239f2d479a-merged.mount: Deactivated successfully.
Feb  2 06:57:56 np0005604943 systemd[1]: run-netns-ovnmeta\x2dedd3a331\x2db14a\x2d4730\x2da21c\x2d7fc793b77005.mount: Deactivated successfully.
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.437 238887 DEBUG oslo_concurrency.lockutils [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Acquiring lock "643f1632-51eb-4ee3-a152-cea78635d59c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.438 238887 DEBUG oslo_concurrency.lockutils [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.439 238887 DEBUG oslo_concurrency.lockutils [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Acquiring lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.439 238887 DEBUG oslo_concurrency.lockutils [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.439 238887 DEBUG oslo_concurrency.lockutils [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.440 238887 INFO nova.compute.manager [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Terminating instance#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.441 238887 DEBUG nova.compute.manager [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 06:57:56 np0005604943 kernel: tap9b14b0fc-01 (unregistering): left promiscuous mode
Feb  2 06:57:56 np0005604943 NetworkManager[49093]: <info>  [1770033476.4911] device (tap9b14b0fc-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.498 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:56 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:56Z|00073|binding|INFO|Releasing lport 9b14b0fc-0160-4715-bd45-6a8ec1128754 from this chassis (sb_readonly=0)
Feb  2 06:57:56 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:56Z|00074|binding|INFO|Setting lport 9b14b0fc-0160-4715-bd45-6a8ec1128754 down in Southbound
Feb  2 06:57:56 np0005604943 ovn_controller[145056]: 2026-02-02T11:57:56Z|00075|binding|INFO|Removing iface tap9b14b0fc-01 ovn-installed in OVS
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.506 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.513 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:bc:aa 10.100.0.7'], port_security=['fa:16:3e:cc:bc:aa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '643f1632-51eb-4ee3-a152-cea78635d59c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a7e9957088fe43eaae10f11401fe89c4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '81c35f8d-4fc6-4e28-a844-0d02fb39bbac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.204'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17dd8f22-1dbd-4081-bcaa-6cbfe492bdad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=9b14b0fc-0160-4715-bd45-6a8ec1128754) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.514 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 9b14b0fc-0160-4715-bd45-6a8ec1128754 in datapath 6b1c6eff-f6e6-4af3-aa02-11290c8b6c83 unbound from our chassis#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.515 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.515 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6b1c6eff-f6e6-4af3-aa02-11290c8b6c83, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.516 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[78ae6a04-b037-45e2-b766-0d1e2cea45f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.517 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83 namespace which is not needed anymore#033[00m
Feb  2 06:57:56 np0005604943 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Feb  2 06:57:56 np0005604943 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 12.658s CPU time.
Feb  2 06:57:56 np0005604943 systemd-machined[206973]: Machine qemu-6-instance-00000006 terminated.
Feb  2 06:57:56 np0005604943 neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83[249579]: [NOTICE]   (249590) : haproxy version is 2.8.14-c23fe91
Feb  2 06:57:56 np0005604943 neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83[249579]: [NOTICE]   (249590) : path to executable is /usr/sbin/haproxy
Feb  2 06:57:56 np0005604943 neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83[249579]: [WARNING]  (249590) : Exiting Master process...
Feb  2 06:57:56 np0005604943 neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83[249579]: [ALERT]    (249590) : Current worker (249603) exited with code 143 (Terminated)
Feb  2 06:57:56 np0005604943 neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83[249579]: [WARNING]  (249590) : All workers exited. Exiting... (0)
Feb  2 06:57:56 np0005604943 podman[250576]: 2026-02-02 11:57:56.630468315 +0000 UTC m=+0.033066658 container create 044a50bebd787df8b613a6a3e596d7f6ab555b60c0d1694515d318b497f4ad0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 06:57:56 np0005604943 systemd[1]: libpod-8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5.scope: Deactivated successfully.
Feb  2 06:57:56 np0005604943 podman[250575]: 2026-02-02 11:57:56.644560425 +0000 UTC m=+0.052607331 container died 8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:57:56 np0005604943 systemd[1]: Started libpod-conmon-044a50bebd787df8b613a6a3e596d7f6ab555b60c0d1694515d318b497f4ad0b.scope.
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.666 238887 INFO nova.virt.libvirt.driver [-] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Instance destroyed successfully.#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.668 238887 DEBUG nova.objects.instance [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lazy-loading 'resources' on Instance uuid 643f1632-51eb-4ee3-a152-cea78635d59c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.684 238887 DEBUG nova.virt.libvirt.vif [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:57:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1409568483',display_name='tempest-VolumesExtendAttachedTest-instance-1409568483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1409568483',id=6,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP+uUo4VUyfglFi+aS2qETcNEtGj44RWS0a2Pk2h7vf5brP5LrWipTQCrkXaRZFBTx21OoM2zKs+JQwCJmwwZKA2GQ11phRMnZCJt8nktB8rm8WWFPRDL6V2IIklbXlWzA==',key_name='tempest-keypair-1569122066',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:57:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a7e9957088fe43eaae10f11401fe89c4',ramdisk_id='',reservation_id='r-0pdj2d24',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesExtendAttachedTest-1942788377',owner_user_name='tempest-VolumesExtendAttachedTest-1942788377-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:57:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e144ee25c2b84ec5a1aecb69ceec619d',uuid=643f1632-51eb-4ee3-a152-cea78635d59c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "address": "fa:16:3e:cc:bc:aa", "network": {"id": "6b1c6eff-f6e6-4af3-aa02-11290c8b6c83", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-678478278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7e9957088fe43eaae10f11401fe89c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b14b0fc-01", "ovs_interfaceid": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.684 238887 DEBUG nova.network.os_vif_util [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Converting VIF {"id": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "address": "fa:16:3e:cc:bc:aa", "network": {"id": "6b1c6eff-f6e6-4af3-aa02-11290c8b6c83", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-678478278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a7e9957088fe43eaae10f11401fe89c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b14b0fc-01", "ovs_interfaceid": "9b14b0fc-0160-4715-bd45-6a8ec1128754", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:57:56 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.685 238887 DEBUG nova.network.os_vif_util [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cc:bc:aa,bridge_name='br-int',has_traffic_filtering=True,id=9b14b0fc-0160-4715-bd45-6a8ec1128754,network=Network(6b1c6eff-f6e6-4af3-aa02-11290c8b6c83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b14b0fc-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.686 238887 DEBUG os_vif [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:bc:aa,bridge_name='br-int',has_traffic_filtering=True,id=9b14b0fc-0160-4715-bd45-6a8ec1128754,network=Network(6b1c6eff-f6e6-4af3-aa02-11290c8b6c83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b14b0fc-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.688 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:56 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5-userdata-shm.mount: Deactivated successfully.
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.688 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b14b0fc-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:56 np0005604943 systemd[1]: var-lib-containers-storage-overlay-c0293fcea6f3ed88d3b1e8f4a3fa7d8db52d06486f20cc5eb19f207acf485c27-merged.mount: Deactivated successfully.
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.690 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.693 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.694 238887 INFO os_vif [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:bc:aa,bridge_name='br-int',has_traffic_filtering=True,id=9b14b0fc-0160-4715-bd45-6a8ec1128754,network=Network(6b1c6eff-f6e6-4af3-aa02-11290c8b6c83),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b14b0fc-01')#033[00m
Feb  2 06:57:56 np0005604943 podman[250575]: 2026-02-02 11:57:56.699456135 +0000 UTC m=+0.107503041 container cleanup 8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:57:56 np0005604943 systemd[1]: libpod-conmon-8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5.scope: Deactivated successfully.
Feb  2 06:57:56 np0005604943 podman[250576]: 2026-02-02 11:57:56.705667987 +0000 UTC m=+0.108266350 container init 044a50bebd787df8b613a6a3e596d7f6ab555b60c0d1694515d318b497f4ad0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.707 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.712 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:56 np0005604943 podman[250576]: 2026-02-02 11:57:56.616494379 +0000 UTC m=+0.019092742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:57:56 np0005604943 podman[250576]: 2026-02-02 11:57:56.71492451 +0000 UTC m=+0.117522853 container start 044a50bebd787df8b613a6a3e596d7f6ab555b60c0d1694515d318b497f4ad0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 06:57:56 np0005604943 podman[250576]: 2026-02-02 11:57:56.718741811 +0000 UTC m=+0.121340234 container attach 044a50bebd787df8b613a6a3e596d7f6ab555b60c0d1694515d318b497f4ad0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bohr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 06:57:56 np0005604943 wonderful_bohr[250622]: 167 167
Feb  2 06:57:56 np0005604943 systemd[1]: libpod-044a50bebd787df8b613a6a3e596d7f6ab555b60c0d1694515d318b497f4ad0b.scope: Deactivated successfully.
Feb  2 06:57:56 np0005604943 podman[250655]: 2026-02-02 11:57:56.761379919 +0000 UTC m=+0.029650269 container died 044a50bebd787df8b613a6a3e596d7f6ab555b60c0d1694515d318b497f4ad0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bohr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Feb  2 06:57:56 np0005604943 podman[250641]: 2026-02-02 11:57:56.76982805 +0000 UTC m=+0.049227822 container remove 8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.773 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[001d8f47-6562-48d7-b0bd-944345c3a9db]: (4, ('Mon Feb  2 11:57:56 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83 (8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5)\n8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5\nMon Feb  2 11:57:56 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83 (8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5)\n8440f63611f43d1a2d936ed7e560e833ea82bf2cfa4aa13b874eaa81e0f0a9b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.775 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7c6c6d99-432c-4fd7-9ff4-11853de8e72d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.776 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b1c6eff-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:57:56 np0005604943 kernel: tap6b1c6eff-f0: left promiscuous mode
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.779 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.783 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6b045132-6117-4d9b-933b-9882124efb92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.787 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.795 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2987b3f8-e98b-4039-9460-a025a95bc7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:56 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b86651cbec5811a92190cfd574bbf5d2158ef7e53ddcfb6262fb4a2480be1066-merged.mount: Deactivated successfully.
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.799 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5fe8a4-31d9-4ee6-9e90-e49a2ee02cc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:56 np0005604943 podman[250655]: 2026-02-02 11:57:56.811872383 +0000 UTC m=+0.080142703 container remove 044a50bebd787df8b613a6a3e596d7f6ab555b60c0d1694515d318b497f4ad0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 06:57:56 np0005604943 systemd[1]: libpod-conmon-044a50bebd787df8b613a6a3e596d7f6ab555b60c0d1694515d318b497f4ad0b.scope: Deactivated successfully.
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.819 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[054fa58f-712b-4ead-b192-06d3ffe6ab6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389614, 'reachable_time': 24548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250677, 'error': None, 'target': 'ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.826 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6b1c6eff-f6e6-4af3-aa02-11290c8b6c83 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.827 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba97640-846d-453a-bf96-0fd40914ce92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:57:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:57:56.828 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 06:57:56 np0005604943 podman[250686]: 2026-02-02 11:57:56.962771801 +0000 UTC m=+0.040373160 container create 2f25979d9215ef70aa2ec4e9edd804588eca2ecec802d10660f3003783020c98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.969 238887 INFO nova.virt.libvirt.driver [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Deleting instance files /var/lib/nova/instances/643f1632-51eb-4ee3-a152-cea78635d59c_del#033[00m
Feb  2 06:57:56 np0005604943 nova_compute[238883]: 2026-02-02 11:57:56.969 238887 INFO nova.virt.libvirt.driver [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Deletion of /var/lib/nova/instances/643f1632-51eb-4ee3-a152-cea78635d59c_del complete#033[00m
Feb  2 06:57:56 np0005604943 systemd[1]: Started libpod-conmon-2f25979d9215ef70aa2ec4e9edd804588eca2ecec802d10660f3003783020c98.scope.
Feb  2 06:57:57 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:57:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd955474ca095bbc2d44d4dd89cd7674f32fb60fe7aa8040d9bb052d452d5535/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd955474ca095bbc2d44d4dd89cd7674f32fb60fe7aa8040d9bb052d452d5535/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd955474ca095bbc2d44d4dd89cd7674f32fb60fe7aa8040d9bb052d452d5535/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:57 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd955474ca095bbc2d44d4dd89cd7674f32fb60fe7aa8040d9bb052d452d5535/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.027 238887 DEBUG nova.network.neutron [-] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.038 238887 INFO nova.compute.manager [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.039 238887 DEBUG oslo.service.loopingcall [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.039 238887 DEBUG nova.compute.manager [-] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.039 238887 DEBUG nova.network.neutron [-] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 06:57:57 np0005604943 podman[250686]: 2026-02-02 11:57:57.042665177 +0000 UTC m=+0.120266536 container init 2f25979d9215ef70aa2ec4e9edd804588eca2ecec802d10660f3003783020c98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jemison, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 06:57:57 np0005604943 podman[250686]: 2026-02-02 11:57:56.947622734 +0000 UTC m=+0.025224113 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:57:57 np0005604943 podman[250686]: 2026-02-02 11:57:57.047701059 +0000 UTC m=+0.125302418 container start 2f25979d9215ef70aa2ec4e9edd804588eca2ecec802d10660f3003783020c98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.048 238887 INFO nova.compute.manager [-] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Took 0.97 seconds to deallocate network for instance.#033[00m
Feb  2 06:57:57 np0005604943 podman[250686]: 2026-02-02 11:57:57.050337328 +0000 UTC m=+0.127938717 container attach 2f25979d9215ef70aa2ec4e9edd804588eca2ecec802d10660f3003783020c98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jemison, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.164 238887 WARNING nova.volume.cinder [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Attachment 947b0bee-cfa5-4850-9274-c996d43eacb0 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 947b0bee-cfa5-4850-9274-c996d43eacb0. (HTTP 404) (Request-ID: req-a7a0d4f1-ed54-47e7-a6bb-22a91eab032a)#033[00m
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.165 238887 INFO nova.compute.manager [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Took 0.12 seconds to detach 1 volumes for instance.#033[00m
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.205 238887 DEBUG oslo_concurrency.lockutils [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.205 238887 DEBUG oslo_concurrency.lockutils [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]: {
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:    "0": [
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:        {
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "devices": [
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "/dev/loop3"
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            ],
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_name": "ceph_lv0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_size": "21470642176",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "name": "ceph_lv0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "tags": {
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.cluster_name": "ceph",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.crush_device_class": "",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.encrypted": "0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.objectstore": "bluestore",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.osd_id": "0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.type": "block",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.vdo": "0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.with_tpm": "0"
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            },
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "type": "block",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "vg_name": "ceph_vg0"
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:        }
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:    ],
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:    "1": [
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:        {
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "devices": [
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "/dev/loop4"
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            ],
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_name": "ceph_lv1",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_size": "21470642176",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "name": "ceph_lv1",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "tags": {
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.cluster_name": "ceph",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.crush_device_class": "",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.encrypted": "0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.objectstore": "bluestore",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.osd_id": "1",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.type": "block",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.vdo": "0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.with_tpm": "0"
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            },
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "type": "block",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "vg_name": "ceph_vg1"
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:        }
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:    ],
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:    "2": [
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:        {
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "devices": [
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "/dev/loop5"
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            ],
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_name": "ceph_lv2",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_size": "21470642176",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "name": "ceph_lv2",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "tags": {
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.cluster_name": "ceph",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.crush_device_class": "",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.encrypted": "0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.objectstore": "bluestore",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.osd_id": "2",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.type": "block",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.vdo": "0",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:                "ceph.with_tpm": "0"
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            },
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "type": "block",
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:            "vg_name": "ceph_vg2"
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:        }
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]:    ]
Feb  2 06:57:57 np0005604943 amazing_jemison[250703]: }
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.309 238887 DEBUG oslo_concurrency.processutils [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:57:57 np0005604943 systemd[1]: run-netns-ovnmeta\x2d6b1c6eff\x2df6e6\x2d4af3\x2daa02\x2d11290c8b6c83.mount: Deactivated successfully.
Feb  2 06:57:57 np0005604943 systemd[1]: libpod-2f25979d9215ef70aa2ec4e9edd804588eca2ecec802d10660f3003783020c98.scope: Deactivated successfully.
Feb  2 06:57:57 np0005604943 podman[250686]: 2026-02-02 11:57:57.335403005 +0000 UTC m=+0.413004404 container died 2f25979d9215ef70aa2ec4e9edd804588eca2ecec802d10660f3003783020c98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 06:57:57 np0005604943 systemd[1]: var-lib-containers-storage-overlay-fd955474ca095bbc2d44d4dd89cd7674f32fb60fe7aa8040d9bb052d452d5535-merged.mount: Deactivated successfully.
Feb  2 06:57:57 np0005604943 podman[250686]: 2026-02-02 11:57:57.3782795 +0000 UTC m=+0.455880859 container remove 2f25979d9215ef70aa2ec4e9edd804588eca2ecec802d10660f3003783020c98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:57:57 np0005604943 systemd[1]: libpod-conmon-2f25979d9215ef70aa2ec4e9edd804588eca2ecec802d10660f3003783020c98.scope: Deactivated successfully.
Feb  2 06:57:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:57:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Feb  2 06:57:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Feb  2 06:57:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.606 238887 DEBUG nova.compute.manager [req-db14abe5-fa76-49a3-8459-3591c252971e req-eb6931d2-1ab2-490f-aca8-ce2a76e892ce 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Received event network-vif-unplugged-9b14b0fc-0160-4715-bd45-6a8ec1128754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.607 238887 DEBUG oslo_concurrency.lockutils [req-db14abe5-fa76-49a3-8459-3591c252971e req-eb6931d2-1ab2-490f-aca8-ce2a76e892ce 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.607 238887 DEBUG oslo_concurrency.lockutils [req-db14abe5-fa76-49a3-8459-3591c252971e req-eb6931d2-1ab2-490f-aca8-ce2a76e892ce 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.607 238887 DEBUG oslo_concurrency.lockutils [req-db14abe5-fa76-49a3-8459-3591c252971e req-eb6931d2-1ab2-490f-aca8-ce2a76e892ce 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.608 238887 DEBUG nova.compute.manager [req-db14abe5-fa76-49a3-8459-3591c252971e req-eb6931d2-1ab2-490f-aca8-ce2a76e892ce 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] No waiting events found dispatching network-vif-unplugged-9b14b0fc-0160-4715-bd45-6a8ec1128754 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.608 238887 DEBUG nova.compute.manager [req-db14abe5-fa76-49a3-8459-3591c252971e req-eb6931d2-1ab2-490f-aca8-ce2a76e892ce 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Received event network-vif-unplugged-9b14b0fc-0160-4715-bd45-6a8ec1128754 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.608 238887 DEBUG nova.compute.manager [req-db14abe5-fa76-49a3-8459-3591c252971e req-eb6931d2-1ab2-490f-aca8-ce2a76e892ce 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Received event network-vif-plugged-9b14b0fc-0160-4715-bd45-6a8ec1128754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.608 238887 DEBUG oslo_concurrency.lockutils [req-db14abe5-fa76-49a3-8459-3591c252971e req-eb6931d2-1ab2-490f-aca8-ce2a76e892ce 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.609 238887 DEBUG oslo_concurrency.lockutils [req-db14abe5-fa76-49a3-8459-3591c252971e req-eb6931d2-1ab2-490f-aca8-ce2a76e892ce 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.609 238887 DEBUG oslo_concurrency.lockutils [req-db14abe5-fa76-49a3-8459-3591c252971e req-eb6931d2-1ab2-490f-aca8-ce2a76e892ce 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.609 238887 DEBUG nova.compute.manager [req-db14abe5-fa76-49a3-8459-3591c252971e req-eb6931d2-1ab2-490f-aca8-ce2a76e892ce 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] No waiting events found dispatching network-vif-plugged-9b14b0fc-0160-4715-bd45-6a8ec1128754 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.609 238887 WARNING nova.compute.manager [req-db14abe5-fa76-49a3-8459-3591c252971e req-eb6931d2-1ab2-490f-aca8-ce2a76e892ce 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Received unexpected event network-vif-plugged-9b14b0fc-0160-4715-bd45-6a8ec1128754 for instance with vm_state active and task_state deleting.
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.728 238887 DEBUG nova.compute.manager [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Received event network-vif-plugged-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.728 238887 DEBUG oslo_concurrency.lockutils [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.728 238887 DEBUG oslo_concurrency.lockutils [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.729 238887 DEBUG oslo_concurrency.lockutils [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.729 238887 DEBUG nova.compute.manager [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] No waiting events found dispatching network-vif-plugged-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.729 238887 WARNING nova.compute.manager [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Received unexpected event network-vif-plugged-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c for instance with vm_state deleted and task_state None.
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.729 238887 DEBUG nova.compute.manager [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Received event network-changed-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.729 238887 DEBUG nova.compute.manager [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Refreshing instance network info cache due to event network-changed-2e53d89b-c3e1-480c-af8c-98b7e9b8d425. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.729 238887 DEBUG oslo_concurrency.lockutils [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-e3333751-86a5-40df-9180-a0c8153f06a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.729 238887 DEBUG oslo_concurrency.lockutils [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-e3333751-86a5-40df-9180-a0c8153f06a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.730 238887 DEBUG nova.network.neutron [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Refreshing network info cache for port 2e53d89b-c3e1-480c-af8c-98b7e9b8d425 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb  2 06:57:57 np0005604943 podman[250805]: 2026-02-02 11:57:57.759428267 +0000 UTC m=+0.041676454 container create b9f0ed47e2f4329f80e662516e632010f3d87f74b8e86bbf9ae421121e1fda79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hertz, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 06:57:57 np0005604943 systemd[1]: Started libpod-conmon-b9f0ed47e2f4329f80e662516e632010f3d87f74b8e86bbf9ae421121e1fda79.scope.
Feb  2 06:57:57 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:57:57 np0005604943 podman[250805]: 2026-02-02 11:57:57.821877295 +0000 UTC m=+0.104125502 container init b9f0ed47e2f4329f80e662516e632010f3d87f74b8e86bbf9ae421121e1fda79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hertz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:57:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:57:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2321401931' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:57:57 np0005604943 podman[250805]: 2026-02-02 11:57:57.826496046 +0000 UTC m=+0.108744223 container start b9f0ed47e2f4329f80e662516e632010f3d87f74b8e86bbf9ae421121e1fda79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hertz, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:57:57 np0005604943 boring_hertz[250819]: 167 167
Feb  2 06:57:57 np0005604943 systemd[1]: libpod-b9f0ed47e2f4329f80e662516e632010f3d87f74b8e86bbf9ae421121e1fda79.scope: Deactivated successfully.
Feb  2 06:57:57 np0005604943 conmon[250819]: conmon b9f0ed47e2f4329f80e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9f0ed47e2f4329f80e662516e632010f3d87f74b8e86bbf9ae421121e1fda79.scope/container/memory.events
Feb  2 06:57:57 np0005604943 podman[250805]: 2026-02-02 11:57:57.830990214 +0000 UTC m=+0.113238411 container attach b9f0ed47e2f4329f80e662516e632010f3d87f74b8e86bbf9ae421121e1fda79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 06:57:57 np0005604943 podman[250805]: 2026-02-02 11:57:57.831533879 +0000 UTC m=+0.113782066 container died b9f0ed47e2f4329f80e662516e632010f3d87f74b8e86bbf9ae421121e1fda79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hertz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:57:57 np0005604943 podman[250805]: 2026-02-02 11:57:57.742738599 +0000 UTC m=+0.024986806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.847 238887 DEBUG oslo_concurrency.processutils [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.854 238887 DEBUG nova.compute.provider_tree [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 06:57:57 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ed9c76350e7a7a93463f98a868b349c9a5b0b6ceb14ab40fa04c20efa7e6cb24-merged.mount: Deactivated successfully.
Feb  2 06:57:57 np0005604943 podman[250805]: 2026-02-02 11:57:57.870359826 +0000 UTC m=+0.152608003 container remove b9f0ed47e2f4329f80e662516e632010f3d87f74b8e86bbf9ae421121e1fda79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hertz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.876 238887 DEBUG nova.scheduler.client.report [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:57:57 np0005604943 systemd[1]: libpod-conmon-b9f0ed47e2f4329f80e662516e632010f3d87f74b8e86bbf9ae421121e1fda79.scope: Deactivated successfully.
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.898 238887 DEBUG oslo_concurrency.lockutils [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.924 238887 INFO nova.scheduler.client.report [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Deleted allocations for instance 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9#033[00m
Feb  2 06:57:57 np0005604943 nova_compute[238883]: 2026-02-02 11:57:57.989 238887 DEBUG oslo_concurrency.lockutils [None req-8c29b816-b50e-41bc-86ea-344dc2bdee5a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:57 np0005604943 podman[250843]: 2026-02-02 11:57:57.996559236 +0000 UTC m=+0.040507183 container create 1991ec4a1abd6a6315988548846ee601c5f1d69866dec8383cdd16763e1de80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 06:57:58 np0005604943 systemd[1]: Started libpod-conmon-1991ec4a1abd6a6315988548846ee601c5f1d69866dec8383cdd16763e1de80d.scope.
Feb  2 06:57:58 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:57:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1d748b1c724815b8e2fbce7893b73d4b12c3143e19f22a56f108cec512a9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1d748b1c724815b8e2fbce7893b73d4b12c3143e19f22a56f108cec512a9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1d748b1c724815b8e2fbce7893b73d4b12c3143e19f22a56f108cec512a9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1d748b1c724815b8e2fbce7893b73d4b12c3143e19f22a56f108cec512a9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:57:58 np0005604943 podman[250843]: 2026-02-02 11:57:58.067760644 +0000 UTC m=+0.111708611 container init 1991ec4a1abd6a6315988548846ee601c5f1d69866dec8383cdd16763e1de80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:57:58 np0005604943 nova_compute[238883]: 2026-02-02 11:57:58.071 238887 DEBUG nova.network.neutron [-] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:57:58 np0005604943 podman[250843]: 2026-02-02 11:57:57.975839844 +0000 UTC m=+0.019787811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:57:58 np0005604943 podman[250843]: 2026-02-02 11:57:58.074171193 +0000 UTC m=+0.118119140 container start 1991ec4a1abd6a6315988548846ee601c5f1d69866dec8383cdd16763e1de80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_raman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 06:57:58 np0005604943 podman[250843]: 2026-02-02 11:57:58.07710777 +0000 UTC m=+0.121055737 container attach 1991ec4a1abd6a6315988548846ee601c5f1d69866dec8383cdd16763e1de80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_raman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 06:57:58 np0005604943 nova_compute[238883]: 2026-02-02 11:57:58.095 238887 INFO nova.compute.manager [-] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Took 1.06 seconds to deallocate network for instance.#033[00m
Feb  2 06:57:58 np0005604943 nova_compute[238883]: 2026-02-02 11:57:58.136 238887 DEBUG oslo_concurrency.lockutils [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:57:58 np0005604943 nova_compute[238883]: 2026-02-02 11:57:58.137 238887 DEBUG oslo_concurrency.lockutils [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:57:58 np0005604943 nova_compute[238883]: 2026-02-02 11:57:58.194 238887 DEBUG oslo_concurrency.processutils [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:57:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 177 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.3 MiB/s wr, 337 op/s
Feb  2 06:57:58 np0005604943 lvm[250954]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:57:58 np0005604943 lvm[250954]: VG ceph_vg0 finished
Feb  2 06:57:58 np0005604943 lvm[250956]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:57:58 np0005604943 lvm[250956]: VG ceph_vg1 finished
Feb  2 06:57:58 np0005604943 lvm[250957]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:57:58 np0005604943 lvm[250957]: VG ceph_vg2 finished
Feb  2 06:57:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:57:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1265690578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:57:58 np0005604943 nova_compute[238883]: 2026-02-02 11:57:58.721 238887 DEBUG oslo_concurrency.processutils [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:57:58 np0005604943 nova_compute[238883]: 2026-02-02 11:57:58.726 238887 DEBUG nova.compute.provider_tree [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:57:58 np0005604943 nova_compute[238883]: 2026-02-02 11:57:58.742 238887 DEBUG nova.scheduler.client.report [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:57:58 np0005604943 mystifying_raman[250859]: {}
Feb  2 06:57:58 np0005604943 nova_compute[238883]: 2026-02-02 11:57:58.767 238887 DEBUG oslo_concurrency.lockutils [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:58 np0005604943 systemd[1]: libpod-1991ec4a1abd6a6315988548846ee601c5f1d69866dec8383cdd16763e1de80d.scope: Deactivated successfully.
Feb  2 06:57:58 np0005604943 podman[250843]: 2026-02-02 11:57:58.777609243 +0000 UTC m=+0.821557210 container died 1991ec4a1abd6a6315988548846ee601c5f1d69866dec8383cdd16763e1de80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 06:57:58 np0005604943 nova_compute[238883]: 2026-02-02 11:57:58.794 238887 INFO nova.scheduler.client.report [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Deleted allocations for instance 643f1632-51eb-4ee3-a152-cea78635d59c#033[00m
Feb  2 06:57:58 np0005604943 systemd[1]: var-lib-containers-storage-overlay-47a1d748b1c724815b8e2fbce7893b73d4b12c3143e19f22a56f108cec512a9b-merged.mount: Deactivated successfully.
Feb  2 06:57:58 np0005604943 podman[250843]: 2026-02-02 11:57:58.819570624 +0000 UTC m=+0.863518571 container remove 1991ec4a1abd6a6315988548846ee601c5f1d69866dec8383cdd16763e1de80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:57:58 np0005604943 systemd[1]: libpod-conmon-1991ec4a1abd6a6315988548846ee601c5f1d69866dec8383cdd16763e1de80d.scope: Deactivated successfully.
Feb  2 06:57:58 np0005604943 nova_compute[238883]: 2026-02-02 11:57:58.847 238887 DEBUG oslo_concurrency.lockutils [None req-9f1c2c10-2f1c-4eb8-9f14-d0c68a7b4a9b e144ee25c2b84ec5a1aecb69ceec619d a7e9957088fe43eaae10f11401fe89c4 - - default default] Lock "643f1632-51eb-4ee3-a152-cea78635d59c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:57:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:57:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:57:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:57:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:57:59 np0005604943 nova_compute[238883]: 2026-02-02 11:57:59.092 238887 DEBUG nova.network.neutron [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Updated VIF entry in instance network info cache for port 2e53d89b-c3e1-480c-af8c-98b7e9b8d425. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:57:59 np0005604943 nova_compute[238883]: 2026-02-02 11:57:59.092 238887 DEBUG nova.network.neutron [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Updating instance_info_cache with network_info: [{"id": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "address": "fa:16:3e:b7:29:95", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e53d89b-c3", "ovs_interfaceid": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:57:59 np0005604943 nova_compute[238883]: 2026-02-02 11:57:59.116 238887 DEBUG oslo_concurrency.lockutils [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-e3333751-86a5-40df-9180-a0c8153f06a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:57:59 np0005604943 nova_compute[238883]: 2026-02-02 11:57:59.117 238887 DEBUG nova.compute.manager [req-851b71e2-8718-496a-a3ae-bc3f8a6c2817 req-fd610f29-6a1d-4991-a13a-e88ade025676 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Received event network-vif-deleted-ebc43340-2bcb-4b83-bd2e-2a9d42a23f1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:57:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:57:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836571688' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:57:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:57:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836571688' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:57:59 np0005604943 nova_compute[238883]: 2026-02-02 11:57:59.806 238887 DEBUG nova.compute.manager [req-ed63470f-17a2-425e-8609-8dabaa8c6174 req-c3f9d52c-867d-4996-9466-cd49c4d300a3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Received event network-vif-deleted-9b14b0fc-0160-4715-bd45-6a8ec1128754 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:57:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:57:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:58:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 991 KiB/s wr, 264 op/s
Feb  2 06:58:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/535905565' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/535905565' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Feb  2 06:58:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Feb  2 06:58:00 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Feb  2 06:58:01 np0005604943 podman[251000]: 2026-02-02 11:58:01.028953624 +0000 UTC m=+0.051801080 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:58:01 np0005604943 podman[250999]: 2026-02-02 11:58:01.074101418 +0000 UTC m=+0.097380995 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Feb  2 06:58:01 np0005604943 nova_compute[238883]: 2026-02-02 11:58:01.077 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:01 np0005604943 nova_compute[238883]: 2026-02-02 11:58:01.710 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 13 KiB/s wr, 206 op/s
Feb  2 06:58:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:58:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Feb  2 06:58:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Feb  2 06:58:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Feb  2 06:58:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Feb  2 06:58:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Feb  2 06:58:03 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Feb  2 06:58:04 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:04Z|00076|binding|INFO|Releasing lport 7f7a24e7-2e36-4c1c-8857-8367e857534f from this chassis (sb_readonly=0)
Feb  2 06:58:04 np0005604943 nova_compute[238883]: 2026-02-02 11:58:04.193 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 498 KiB/s rd, 6.0 KiB/s wr, 152 op/s
Feb  2 06:58:05 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:05.829 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:58:05 np0005604943 nova_compute[238883]: 2026-02-02 11:58:05.977 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:58:06 np0005604943 nova_compute[238883]: 2026-02-02 11:58:06.004 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Triggering sync for uuid e3333751-86a5-40df-9180-a0c8153f06a4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb  2 06:58:06 np0005604943 nova_compute[238883]: 2026-02-02 11:58:06.004 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "e3333751-86a5-40df-9180-a0c8153f06a4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:06 np0005604943 nova_compute[238883]: 2026-02-02 11:58:06.005 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "e3333751-86a5-40df-9180-a0c8153f06a4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:06 np0005604943 nova_compute[238883]: 2026-02-02 11:58:06.047 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "e3333751-86a5-40df-9180-a0c8153f06a4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:06 np0005604943 nova_compute[238883]: 2026-02-02 11:58:06.074 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:06 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:06Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:29:95 10.100.0.9
Feb  2 06:58:06 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:06Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:29:95 10.100.0.9
Feb  2 06:58:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 3.3 KiB/s wr, 94 op/s
Feb  2 06:58:06 np0005604943 nova_compute[238883]: 2026-02-02 11:58:06.712 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:58:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Feb  2 06:58:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Feb  2 06:58:07 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Feb  2 06:58:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 109 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 444 KiB/s rd, 2.9 MiB/s wr, 183 op/s
Feb  2 06:58:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:58:09
Feb  2 06:58:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:58:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:58:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.mgr', '.rgw.root']
Feb  2 06:58:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:58:09 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:09Z|00077|binding|INFO|Releasing lport 7f7a24e7-2e36-4c1c-8857-8367e857534f from this chassis (sb_readonly=0)
Feb  2 06:58:09 np0005604943 nova_compute[238883]: 2026-02-02 11:58:09.705 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:10.022 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:10.023 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:10.024 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 121 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 568 KiB/s rd, 3.5 MiB/s wr, 144 op/s
Feb  2 06:58:10 np0005604943 nova_compute[238883]: 2026-02-02 11:58:10.624 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033475.623468, 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:58:10 np0005604943 nova_compute[238883]: 2026-02-02 11:58:10.624 238887 INFO nova.compute.manager [-] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] VM Stopped (Lifecycle Event)#033[00m
Feb  2 06:58:10 np0005604943 nova_compute[238883]: 2026-02-02 11:58:10.676 238887 DEBUG nova.compute.manager [None req-8b478e43-61d7-4024-8335-1b9a941f8801 - - - - - -] [instance: 9b8cd271-3a27-4cfd-8eeb-c6a60481f4e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:58:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:58:10 np0005604943 nova_compute[238883]: 2026-02-02 11:58:10.992 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "177de248-c6fd-437b-9326-31ed9842fe34" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:10 np0005604943 nova_compute[238883]: 2026-02-02 11:58:10.992 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.018 238887 DEBUG nova.compute.manager [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.079 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.107 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.107 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.119 238887 DEBUG nova.virt.hardware [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.120 238887 INFO nova.compute.claims [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.250 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.666 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033476.6652122, 643f1632-51eb-4ee3-a152-cea78635d59c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.667 238887 INFO nova.compute.manager [-] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] VM Stopped (Lifecycle Event)#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.696 238887 DEBUG nova.compute.manager [None req-a94305e5-f714-4ed7-919e-6175895d8728 - - - - - -] [instance: 643f1632-51eb-4ee3-a152-cea78635d59c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.713 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:58:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/662055749' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.742 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.746 238887 DEBUG nova.compute.provider_tree [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.762 238887 DEBUG nova.scheduler.client.report [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.784 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.785 238887 DEBUG nova.compute.manager [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.837 238887 DEBUG nova.compute.manager [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.838 238887 DEBUG nova.network.neutron [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.863 238887 INFO nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.885 238887 DEBUG nova.compute.manager [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.992 238887 DEBUG nova.compute.manager [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.993 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 06:58:11 np0005604943 nova_compute[238883]: 2026-02-02 11:58:11.993 238887 INFO nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Creating image(s)#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.014 238887 DEBUG nova.storage.rbd_utils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 177de248-c6fd-437b-9326-31ed9842fe34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.034 238887 DEBUG nova.storage.rbd_utils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 177de248-c6fd-437b-9326-31ed9842fe34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.055 238887 DEBUG nova.storage.rbd_utils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 177de248-c6fd-437b-9326-31ed9842fe34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.058 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.103 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.104 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.105 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.105 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.128 238887 DEBUG nova.storage.rbd_utils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 177de248-c6fd-437b-9326-31ed9842fe34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.132 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 177de248-c6fd-437b-9326-31ed9842fe34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.164 238887 DEBUG nova.policy [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4846ccd205b54116a828ad91820ef58d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c061a009eae241049a1e3a1c35aa2503', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 06:58:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 121 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 480 KiB/s rd, 3.1 MiB/s wr, 99 op/s
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.317 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 177de248-c6fd-437b-9326-31ed9842fe34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.366 238887 DEBUG nova.storage.rbd_utils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] resizing rbd image 177de248-c6fd-437b-9326-31ed9842fe34_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.435 238887 DEBUG nova.objects.instance [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lazy-loading 'migration_context' on Instance uuid 177de248-c6fd-437b-9326-31ed9842fe34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.451 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.452 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Ensure instance console log exists: /var/lib/nova/instances/177de248-c6fd-437b-9326-31ed9842fe34/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.452 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.453 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.453 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:58:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Feb  2 06:58:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Feb  2 06:58:12 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Feb  2 06:58:12 np0005604943 nova_compute[238883]: 2026-02-02 11:58:12.939 238887 DEBUG nova.network.neutron [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Successfully created port: c33f8157-4662-40a2-867e-4dac8467a80b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.006 238887 DEBUG oslo_concurrency.lockutils [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "e3333751-86a5-40df-9180-a0c8153f06a4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.006 238887 DEBUG oslo_concurrency.lockutils [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.021 238887 DEBUG nova.objects.instance [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lazy-loading 'flavor' on Instance uuid e3333751-86a5-40df-9180-a0c8153f06a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.040 238887 INFO nova.virt.libvirt.driver [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Ignoring supplied device name: /dev/vdb#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.054 238887 DEBUG oslo_concurrency.lockutils [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.327 238887 DEBUG oslo_concurrency.lockutils [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "e3333751-86a5-40df-9180-a0c8153f06a4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.328 238887 DEBUG oslo_concurrency.lockutils [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.328 238887 INFO nova.compute.manager [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Attaching volume f2d7a5ba-c440-4bb5-ab12-e7990d48251a to /dev/vdb#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.562 238887 DEBUG os_brick.utils [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.563 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.581 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.581 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[2598bd2d-f647-48e0-8451-77f7d961ecfa]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.582 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.587 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.588 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[b538ecbe-7519-46ea-8464-6fb4eac0d938]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.589 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.594 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.594 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[17de35c9-26a3-475e-a876-35677bbf4b32]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.595 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[a23f2e7f-a552-4ca7-bf72-a8b206ce4efa]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.595 238887 DEBUG oslo_concurrency.processutils [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.613 238887 DEBUG oslo_concurrency.processutils [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.616 238887 DEBUG os_brick.initiator.connectors.lightos [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.617 238887 DEBUG os_brick.initiator.connectors.lightos [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.617 238887 DEBUG os_brick.initiator.connectors.lightos [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.618 238887 DEBUG os_brick.utils [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] <== get_connector_properties: return (55ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.618 238887 DEBUG nova.virt.block_device [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Updating existing volume attachment record: e2a62c00-c6b7-470f-9bab-30c58379fc2c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.811 238887 DEBUG nova.network.neutron [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Successfully updated port: c33f8157-4662-40a2-867e-4dac8467a80b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.829 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "refresh_cache-177de248-c6fd-437b-9326-31ed9842fe34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.829 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquired lock "refresh_cache-177de248-c6fd-437b-9326-31ed9842fe34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.829 238887 DEBUG nova.network.neutron [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.907 238887 DEBUG nova.compute.manager [req-8f1a6113-6abd-4081-abf8-80c9cdcf2caf req-f2750c4e-3982-43ff-a286-a41790956b00 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Received event network-changed-c33f8157-4662-40a2-867e-4dac8467a80b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.907 238887 DEBUG nova.compute.manager [req-8f1a6113-6abd-4081-abf8-80c9cdcf2caf req-f2750c4e-3982-43ff-a286-a41790956b00 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Refreshing instance network info cache due to event network-changed-c33f8157-4662-40a2-867e-4dac8467a80b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:58:13 np0005604943 nova_compute[238883]: 2026-02-02 11:58:13.907 238887 DEBUG oslo_concurrency.lockutils [req-8f1a6113-6abd-4081-abf8-80c9cdcf2caf req-f2750c4e-3982-43ff-a286-a41790956b00 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-177de248-c6fd-437b-9326-31ed9842fe34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.129 238887 DEBUG nova.network.neutron [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 06:58:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 167 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 516 KiB/s rd, 5.9 MiB/s wr, 134 op/s
Feb  2 06:58:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:58:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/282700074' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.539 238887 DEBUG nova.objects.instance [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lazy-loading 'flavor' on Instance uuid e3333751-86a5-40df-9180-a0c8153f06a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.564 238887 DEBUG nova.virt.libvirt.driver [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Attempting to attach volume f2d7a5ba-c440-4bb5-ab12-e7990d48251a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.567 238887 DEBUG nova.virt.libvirt.guest [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 06:58:14 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:58:14 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-f2d7a5ba-c440-4bb5-ab12-e7990d48251a">
Feb  2 06:58:14 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:58:14 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:58:14 np0005604943 nova_compute[238883]:  <auth username="openstack">
Feb  2 06:58:14 np0005604943 nova_compute[238883]:    <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:58:14 np0005604943 nova_compute[238883]:  </auth>
Feb  2 06:58:14 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:58:14 np0005604943 nova_compute[238883]:  <serial>f2d7a5ba-c440-4bb5-ab12-e7990d48251a</serial>
Feb  2 06:58:14 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:58:14 np0005604943 nova_compute[238883]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.681 238887 DEBUG nova.virt.libvirt.driver [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.681 238887 DEBUG nova.virt.libvirt.driver [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.682 238887 DEBUG nova.virt.libvirt.driver [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.682 238887 DEBUG nova.virt.libvirt.driver [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No VIF found with MAC fa:16:3e:b7:29:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.773 238887 DEBUG nova.network.neutron [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Updating instance_info_cache with network_info: [{"id": "c33f8157-4662-40a2-867e-4dac8467a80b", "address": "fa:16:3e:43:5a:3a", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc33f8157-46", "ovs_interfaceid": "c33f8157-4662-40a2-867e-4dac8467a80b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.796 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Releasing lock "refresh_cache-177de248-c6fd-437b-9326-31ed9842fe34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.796 238887 DEBUG nova.compute.manager [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Instance network_info: |[{"id": "c33f8157-4662-40a2-867e-4dac8467a80b", "address": "fa:16:3e:43:5a:3a", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc33f8157-46", "ovs_interfaceid": "c33f8157-4662-40a2-867e-4dac8467a80b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.798 238887 DEBUG oslo_concurrency.lockutils [req-8f1a6113-6abd-4081-abf8-80c9cdcf2caf req-f2750c4e-3982-43ff-a286-a41790956b00 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-177de248-c6fd-437b-9326-31ed9842fe34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.798 238887 DEBUG nova.network.neutron [req-8f1a6113-6abd-4081-abf8-80c9cdcf2caf req-f2750c4e-3982-43ff-a286-a41790956b00 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Refreshing network info cache for port c33f8157-4662-40a2-867e-4dac8467a80b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.803 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Start _get_guest_xml network_info=[{"id": "c33f8157-4662-40a2-867e-4dac8467a80b", "address": "fa:16:3e:43:5a:3a", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc33f8157-46", "ovs_interfaceid": "c33f8157-4662-40a2-867e-4dac8467a80b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.808 238887 WARNING nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.814 238887 DEBUG nova.virt.libvirt.host [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.815 238887 DEBUG nova.virt.libvirt.host [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.828 238887 DEBUG nova.virt.libvirt.host [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.829 238887 DEBUG nova.virt.libvirt.host [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.829 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.830 238887 DEBUG nova.virt.hardware [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.831 238887 DEBUG nova.virt.hardware [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.831 238887 DEBUG nova.virt.hardware [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.832 238887 DEBUG nova.virt.hardware [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.832 238887 DEBUG nova.virt.hardware [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.832 238887 DEBUG nova.virt.hardware [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.833 238887 DEBUG nova.virt.hardware [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.833 238887 DEBUG nova.virt.hardware [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.834 238887 DEBUG nova.virt.hardware [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.834 238887 DEBUG nova.virt.hardware [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.835 238887 DEBUG nova.virt.hardware [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.840 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:14 np0005604943 nova_compute[238883]: 2026-02-02 11:58:14.879 238887 DEBUG oslo_concurrency.lockutils [None req-c14ce44a-3da1-4f05-904a-6a22f03d985f 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:58:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/15406608' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.372 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.391 238887 DEBUG nova.storage.rbd_utils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 177de248-c6fd-437b-9326-31ed9842fe34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.394 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:58:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1727430883' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.906 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.908 238887 DEBUG nova.virt.libvirt.vif [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:58:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1485192233',display_name='tempest-VolumesSnapshotTestJSON-instance-1485192233',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1485192233',id=8,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL6GxAVrfalGARmJw8jAxIibAgJ2b4nhzo9z62CVUPaqM4r5hNCPfuium6ZNr5kKgsWulTQcmag7XM8ABfuBli83zuAfWi/T+KoQ2rZbscPYLel85XacOnO64w6bfSTvfg==',key_name='tempest-keypair-490923632',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c061a009eae241049a1e3a1c35aa2503',ramdisk_id='',reservation_id='r-99c5ss6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-2018180325',owner_user_name='tempest-VolumesSnapshotTestJSON-2018180325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:58:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4846ccd205b54116a828ad91820ef58d',uuid=177de248-c6fd-437b-9326-31ed9842fe34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c33f8157-4662-40a2-867e-4dac8467a80b", "address": "fa:16:3e:43:5a:3a", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc33f8157-46", "ovs_interfaceid": "c33f8157-4662-40a2-867e-4dac8467a80b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.908 238887 DEBUG nova.network.os_vif_util [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Converting VIF {"id": "c33f8157-4662-40a2-867e-4dac8467a80b", "address": "fa:16:3e:43:5a:3a", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc33f8157-46", "ovs_interfaceid": "c33f8157-4662-40a2-867e-4dac8467a80b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.909 238887 DEBUG nova.network.os_vif_util [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:5a:3a,bridge_name='br-int',has_traffic_filtering=True,id=c33f8157-4662-40a2-867e-4dac8467a80b,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc33f8157-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.911 238887 DEBUG nova.objects.instance [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lazy-loading 'pci_devices' on Instance uuid 177de248-c6fd-437b-9326-31ed9842fe34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.928 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] End _get_guest_xml xml=<domain type="kvm">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  <uuid>177de248-c6fd-437b-9326-31ed9842fe34</uuid>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  <name>instance-00000008</name>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-1485192233</nova:name>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 11:58:14</nova:creationTime>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:        <nova:user uuid="4846ccd205b54116a828ad91820ef58d">tempest-VolumesSnapshotTestJSON-2018180325-project-member</nova:user>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:        <nova:project uuid="c061a009eae241049a1e3a1c35aa2503">tempest-VolumesSnapshotTestJSON-2018180325</nova:project>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:        <nova:port uuid="c33f8157-4662-40a2-867e-4dac8467a80b">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <system>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <entry name="serial">177de248-c6fd-437b-9326-31ed9842fe34</entry>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <entry name="uuid">177de248-c6fd-437b-9326-31ed9842fe34</entry>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    </system>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  <os>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  </clock>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/177de248-c6fd-437b-9326-31ed9842fe34_disk">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/177de248-c6fd-437b-9326-31ed9842fe34_disk.config">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:43:5a:3a"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <target dev="tapc33f8157-46"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/177de248-c6fd-437b-9326-31ed9842fe34/console.log" append="off"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    </serial>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <video>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 06:58:15 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 06:58:15 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:58:15 np0005604943 nova_compute[238883]: </domain>
Feb  2 06:58:15 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.929 238887 DEBUG nova.compute.manager [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Preparing to wait for external event network-vif-plugged-c33f8157-4662-40a2-867e-4dac8467a80b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.930 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "177de248-c6fd-437b-9326-31ed9842fe34-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.930 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.930 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.931 238887 DEBUG nova.virt.libvirt.vif [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:58:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1485192233',display_name='tempest-VolumesSnapshotTestJSON-instance-1485192233',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1485192233',id=8,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL6GxAVrfalGARmJw8jAxIibAgJ2b4nhzo9z62CVUPaqM4r5hNCPfuium6ZNr5kKgsWulTQcmag7XM8ABfuBli83zuAfWi/T+KoQ2rZbscPYLel85XacOnO64w6bfSTvfg==',key_name='tempest-keypair-490923632',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c061a009eae241049a1e3a1c35aa2503',ramdisk_id='',reservation_id='r-99c5ss6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-2018180325',owner_user_name='tempest-VolumesSnapshotTestJSON-2018180325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:58:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4846ccd205b54116a828ad91820ef58d',uuid=177de248-c6fd-437b-9326-31ed9842fe34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c33f8157-4662-40a2-867e-4dac8467a80b", "address": "fa:16:3e:43:5a:3a", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc33f8157-46", "ovs_interfaceid": "c33f8157-4662-40a2-867e-4dac8467a80b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.931 238887 DEBUG nova.network.os_vif_util [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Converting VIF {"id": "c33f8157-4662-40a2-867e-4dac8467a80b", "address": "fa:16:3e:43:5a:3a", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc33f8157-46", "ovs_interfaceid": "c33f8157-4662-40a2-867e-4dac8467a80b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.932 238887 DEBUG nova.network.os_vif_util [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:5a:3a,bridge_name='br-int',has_traffic_filtering=True,id=c33f8157-4662-40a2-867e-4dac8467a80b,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc33f8157-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.932 238887 DEBUG os_vif [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:5a:3a,bridge_name='br-int',has_traffic_filtering=True,id=c33f8157-4662-40a2-867e-4dac8467a80b,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc33f8157-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.932 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.933 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.933 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.935 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.936 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc33f8157-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.936 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc33f8157-46, col_values=(('external_ids', {'iface-id': 'c33f8157-4662-40a2-867e-4dac8467a80b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:5a:3a', 'vm-uuid': '177de248-c6fd-437b-9326-31ed9842fe34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.969 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:15 np0005604943 NetworkManager[49093]: <info>  [1770033495.9715] manager: (tapc33f8157-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.973 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.976 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.977 238887 INFO os_vif [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:5a:3a,bridge_name='br-int',has_traffic_filtering=True,id=c33f8157-4662-40a2-867e-4dac8467a80b,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc33f8157-46')#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.989 238887 DEBUG nova.network.neutron [req-8f1a6113-6abd-4081-abf8-80c9cdcf2caf req-f2750c4e-3982-43ff-a286-a41790956b00 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Updated VIF entry in instance network info cache for port c33f8157-4662-40a2-867e-4dac8467a80b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:58:15 np0005604943 nova_compute[238883]: 2026-02-02 11:58:15.989 238887 DEBUG nova.network.neutron [req-8f1a6113-6abd-4081-abf8-80c9cdcf2caf req-f2750c4e-3982-43ff-a286-a41790956b00 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Updating instance_info_cache with network_info: [{"id": "c33f8157-4662-40a2-867e-4dac8467a80b", "address": "fa:16:3e:43:5a:3a", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc33f8157-46", "ovs_interfaceid": "c33f8157-4662-40a2-867e-4dac8467a80b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.008 238887 DEBUG oslo_concurrency.lockutils [req-8f1a6113-6abd-4081-abf8-80c9cdcf2caf req-f2750c4e-3982-43ff-a286-a41790956b00 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-177de248-c6fd-437b-9326-31ed9842fe34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.023 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.023 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.023 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No VIF found with MAC fa:16:3e:43:5a:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.024 238887 INFO nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Using config drive#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.040 238887 DEBUG nova.storage.rbd_utils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 177de248-c6fd-437b-9326-31ed9842fe34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.077 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 167 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 234 KiB/s rd, 4.5 MiB/s wr, 83 op/s
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.312 238887 INFO nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Creating config drive at /var/lib/nova/instances/177de248-c6fd-437b-9326-31ed9842fe34/disk.config#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.316 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/177de248-c6fd-437b-9326-31ed9842fe34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpm8vpsost execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.435 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/177de248-c6fd-437b-9326-31ed9842fe34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpm8vpsost" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.458 238887 DEBUG nova.storage.rbd_utils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] rbd image 177de248-c6fd-437b-9326-31ed9842fe34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.462 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/177de248-c6fd-437b-9326-31ed9842fe34/disk.config 177de248-c6fd-437b-9326-31ed9842fe34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:58:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3998389558' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.567 238887 DEBUG oslo_concurrency.processutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/177de248-c6fd-437b-9326-31ed9842fe34/disk.config 177de248-c6fd-437b-9326-31ed9842fe34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.568 238887 INFO nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Deleting local config drive /var/lib/nova/instances/177de248-c6fd-437b-9326-31ed9842fe34/disk.config because it was imported into RBD.#033[00m
Feb  2 06:58:16 np0005604943 kernel: tapc33f8157-46: entered promiscuous mode
Feb  2 06:58:16 np0005604943 NetworkManager[49093]: <info>  [1770033496.6027] manager: (tapc33f8157-46): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Feb  2 06:58:16 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:16Z|00078|binding|INFO|Claiming lport c33f8157-4662-40a2-867e-4dac8467a80b for this chassis.
Feb  2 06:58:16 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:16Z|00079|binding|INFO|c33f8157-4662-40a2-867e-4dac8467a80b: Claiming fa:16:3e:43:5a:3a 10.100.0.11
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.603 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.611 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:5a:3a 10.100.0.11'], port_security=['fa:16:3e:43:5a:3a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '177de248-c6fd-437b-9326-31ed9842fe34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-edd3a331-b14a-4730-a21c-7fc793b77005', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c061a009eae241049a1e3a1c35aa2503', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07a4532a-2c65-4c57-9063-6634eb312f26', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be5764d7-de7f-4844-afc6-7eadee6d6d3c, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=c33f8157-4662-40a2-867e-4dac8467a80b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.612 155011 INFO neutron.agent.ovn.metadata.agent [-] Port c33f8157-4662-40a2-867e-4dac8467a80b in datapath edd3a331-b14a-4730-a21c-7fc793b77005 bound to our chassis#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.613 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network edd3a331-b14a-4730-a21c-7fc793b77005#033[00m
Feb  2 06:58:16 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:16Z|00080|binding|INFO|Setting lport c33f8157-4662-40a2-867e-4dac8467a80b ovn-installed in OVS
Feb  2 06:58:16 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:16Z|00081|binding|INFO|Setting lport c33f8157-4662-40a2-867e-4dac8467a80b up in Southbound
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.618 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.622 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c4fcda0f-84bc-4008-9d36-382df6bee20d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.625 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapedd3a331-b1 in ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.627 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapedd3a331-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.627 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[0bfa6af7-f424-4c3b-ad64-4a25922aa610]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 systemd-udevd[251394]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.628 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[51dbeae3-43df-4b2f-b604-e845d0008b7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 systemd-machined[206973]: New machine qemu-8-instance-00000008.
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.636 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[10af15b5-f5d8-446d-8dec-e69d223cb841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 NetworkManager[49093]: <info>  [1770033496.6419] device (tapc33f8157-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 06:58:16 np0005604943 NetworkManager[49093]: <info>  [1770033496.6426] device (tapc33f8157-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.646 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b3d72d-4f79-4c2d-89da-36cbf503a416]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.665 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[de564952-4760-47a1-9b6a-2b14d49a8f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 NetworkManager[49093]: <info>  [1770033496.6726] manager: (tapedd3a331-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.671 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[673829b3-ec76-4e90-947f-adf6d67e8da0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.699 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1d1667-982d-4e6d-8fb1-5f9c2a1015b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.703 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[67ae9db8-8d32-414a-8c4d-fe3e46532684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 NetworkManager[49093]: <info>  [1770033496.7217] device (tapedd3a331-b0): carrier: link connected
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.725 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[e6212fde-a69a-40f5-b363-a4d8e6a69116]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.739 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c87bafef-921c-49e4-b077-80e81285bd27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapedd3a331-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f4:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394207, 'reachable_time': 24833, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251427, 'error': None, 'target': 'ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.754 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[14a405c0-e989-4ceb-b947-fa5a572d0817]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:f4cf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394207, 'tstamp': 394207}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251428, 'error': None, 'target': 'ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.766 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c136a048-316c-413c-a0df-fc2a6ebf20da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapedd3a331-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f4:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394207, 'reachable_time': 24833, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251429, 'error': None, 'target': 'ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.792 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[cd0dceb9-9dd2-4efc-8d85-d1b792225234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.831 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7fffcdbe-6a0b-4386-95bd-170bc996bcb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.832 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedd3a331-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.833 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.833 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedd3a331-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:58:16 np0005604943 kernel: tapedd3a331-b0: entered promiscuous mode
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.835 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:16 np0005604943 NetworkManager[49093]: <info>  [1770033496.8361] manager: (tapedd3a331-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.837 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.838 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapedd3a331-b0, col_values=(('external_ids', {'iface-id': 'b2fa0ea4-27d8-4ad2-be31-b707a8a3d0e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.839 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:16 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:16Z|00082|binding|INFO|Releasing lport b2fa0ea4-27d8-4ad2-be31-b707a8a3d0e4 from this chassis (sb_readonly=0)
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.851 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/edd3a331-b14a-4730-a21c-7fc793b77005.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/edd3a331-b14a-4730-a21c-7fc793b77005.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.853 238887 DEBUG nova.compute.manager [req-5471603b-f74a-4e88-b4e8-2210d611b6d1 req-3ebb9788-ba92-4530-9ddd-bb280b07d073 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Received event network-vif-plugged-c33f8157-4662-40a2-867e-4dac8467a80b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.853 238887 DEBUG oslo_concurrency.lockutils [req-5471603b-f74a-4e88-b4e8-2210d611b6d1 req-3ebb9788-ba92-4530-9ddd-bb280b07d073 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "177de248-c6fd-437b-9326-31ed9842fe34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.853 238887 DEBUG oslo_concurrency.lockutils [req-5471603b-f74a-4e88-b4e8-2210d611b6d1 req-3ebb9788-ba92-4530-9ddd-bb280b07d073 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.854 238887 DEBUG oslo_concurrency.lockutils [req-5471603b-f74a-4e88-b4e8-2210d611b6d1 req-3ebb9788-ba92-4530-9ddd-bb280b07d073 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.854 238887 DEBUG nova.compute.manager [req-5471603b-f74a-4e88-b4e8-2210d611b6d1 req-3ebb9788-ba92-4530-9ddd-bb280b07d073 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Processing event network-vif-plugged-c33f8157-4662-40a2-867e-4dac8467a80b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 06:58:16 np0005604943 nova_compute[238883]: 2026-02-02 11:58:16.854 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.855 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7369839c-340b-47de-bfeb-cedc64a05882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.856 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-edd3a331-b14a-4730-a21c-7fc793b77005
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/edd3a331-b14a-4730-a21c-7fc793b77005.pid.haproxy
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID edd3a331-b14a-4730-a21c-7fc793b77005
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 06:58:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:16.856 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005', 'env', 'PROCESS_TAG=haproxy-edd3a331-b14a-4730-a21c-7fc793b77005', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/edd3a331-b14a-4730-a21c-7fc793b77005.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 06:58:17 np0005604943 podman[251462]: 2026-02-02 11:58:17.200167133 +0000 UTC m=+0.044506508 container create 2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 06:58:17 np0005604943 systemd[1]: Started libpod-conmon-2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3.scope.
Feb  2 06:58:17 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:58:17 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9006a10451d02e49b08a32eb412d261a3d300280b9bf53160b290595a81981ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 06:58:17 np0005604943 podman[251462]: 2026-02-02 11:58:17.176471272 +0000 UTC m=+0.020810667 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 06:58:17 np0005604943 podman[251462]: 2026-02-02 11:58:17.278156739 +0000 UTC m=+0.122496144 container init 2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb  2 06:58:17 np0005604943 podman[251462]: 2026-02-02 11:58:17.286076466 +0000 UTC m=+0.130415841 container start 2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 06:58:17 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[251477]: [NOTICE]   (251488) : New worker (251498) forked
Feb  2 06:58:17 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[251477]: [NOTICE]   (251488) : Loading success.
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.453 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033497.4522529, 177de248-c6fd-437b-9326-31ed9842fe34 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.453 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] VM Started (Lifecycle Event)#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.457 238887 DEBUG nova.compute.manager [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.459 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.463 238887 INFO nova.virt.libvirt.driver [-] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Instance spawned successfully.#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.463 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.491 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.496 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:58:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.500 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.500 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.501 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.501 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.501 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.502 238887 DEBUG nova.virt.libvirt.driver [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.535 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.536 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033497.4564035, 177de248-c6fd-437b-9326-31ed9842fe34 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.536 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] VM Paused (Lifecycle Event)#033[00m
Feb  2 06:58:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Feb  2 06:58:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Feb  2 06:58:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.587 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.591 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033497.4590611, 177de248-c6fd-437b-9326-31ed9842fe34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.591 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] VM Resumed (Lifecycle Event)#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.620 238887 INFO nova.compute.manager [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Took 5.63 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.621 238887 DEBUG nova.compute.manager [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.622 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.629 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.661 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.690 238887 INFO nova.compute.manager [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Took 6.61 seconds to build instance.#033[00m
Feb  2 06:58:17 np0005604943 nova_compute[238883]: 2026-02-02 11:58:17.711 238887 DEBUG oslo_concurrency.lockutils [None req-4977068a-e7d4-4843-b72c-aa352da03afc 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 2.7 MiB/s wr, 66 op/s
Feb  2 06:58:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Feb  2 06:58:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Feb  2 06:58:18 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Feb  2 06:58:18 np0005604943 nova_compute[238883]: 2026-02-02 11:58:18.951 238887 DEBUG nova.compute.manager [req-b51bf71c-8f94-4d0a-ab15-35748c5c512f req-6ca1e774-9440-4dff-a541-52abcb66d268 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Received event network-vif-plugged-c33f8157-4662-40a2-867e-4dac8467a80b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:58:18 np0005604943 nova_compute[238883]: 2026-02-02 11:58:18.952 238887 DEBUG oslo_concurrency.lockutils [req-b51bf71c-8f94-4d0a-ab15-35748c5c512f req-6ca1e774-9440-4dff-a541-52abcb66d268 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "177de248-c6fd-437b-9326-31ed9842fe34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:18 np0005604943 nova_compute[238883]: 2026-02-02 11:58:18.952 238887 DEBUG oslo_concurrency.lockutils [req-b51bf71c-8f94-4d0a-ab15-35748c5c512f req-6ca1e774-9440-4dff-a541-52abcb66d268 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:18 np0005604943 nova_compute[238883]: 2026-02-02 11:58:18.953 238887 DEBUG oslo_concurrency.lockutils [req-b51bf71c-8f94-4d0a-ab15-35748c5c512f req-6ca1e774-9440-4dff-a541-52abcb66d268 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:18 np0005604943 nova_compute[238883]: 2026-02-02 11:58:18.953 238887 DEBUG nova.compute.manager [req-b51bf71c-8f94-4d0a-ab15-35748c5c512f req-6ca1e774-9440-4dff-a541-52abcb66d268 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] No waiting events found dispatching network-vif-plugged-c33f8157-4662-40a2-867e-4dac8467a80b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:58:18 np0005604943 nova_compute[238883]: 2026-02-02 11:58:18.953 238887 WARNING nova.compute.manager [req-b51bf71c-8f94-4d0a-ab15-35748c5c512f req-6ca1e774-9440-4dff-a541-52abcb66d268 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Received unexpected event network-vif-plugged-c33f8157-4662-40a2-867e-4dac8467a80b for instance with vm_state active and task_state None.#033[00m
Feb  2 06:58:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/52718307' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/52718307' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2882700834' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2882700834' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Feb  2 06:58:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Feb  2 06:58:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Feb  2 06:58:20 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Feb  2 06:58:20 np0005604943 nova_compute[238883]: 2026-02-02 11:58:20.972 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.027 238887 DEBUG nova.compute.manager [req-04addfdd-c13c-49b7-b538-c963b21c3911 req-5c7b57f9-587e-499f-9dda-91b4d979ee7e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Received event network-changed-c33f8157-4662-40a2-867e-4dac8467a80b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.027 238887 DEBUG nova.compute.manager [req-04addfdd-c13c-49b7-b538-c963b21c3911 req-5c7b57f9-587e-499f-9dda-91b4d979ee7e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Refreshing instance network info cache due to event network-changed-c33f8157-4662-40a2-867e-4dac8467a80b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.027 238887 DEBUG oslo_concurrency.lockutils [req-04addfdd-c13c-49b7-b538-c963b21c3911 req-5c7b57f9-587e-499f-9dda-91b4d979ee7e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-177de248-c6fd-437b-9326-31ed9842fe34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.028 238887 DEBUG oslo_concurrency.lockutils [req-04addfdd-c13c-49b7-b538-c963b21c3911 req-5c7b57f9-587e-499f-9dda-91b4d979ee7e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-177de248-c6fd-437b-9326-31ed9842fe34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.028 238887 DEBUG nova.network.neutron [req-04addfdd-c13c-49b7-b538-c963b21c3911 req-5c7b57f9-587e-499f-9dda-91b4d979ee7e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Refreshing network info cache for port c33f8157-4662-40a2-867e-4dac8467a80b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.080 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011103320433819147 of space, bias 1.0, pg target 0.3330996130145744 quantized to 32 (current 32)
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.991096591471291e-06 of space, bias 1.0, pg target 0.0020973289774413872 quantized to 32 (current 32)
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 4.847704716055407e-07 of space, bias 1.0, pg target 0.00014543114148166223 quantized to 32 (current 32)
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661877902587923 of space, bias 1.0, pg target 0.19985633707763767 quantized to 32 (current 32)
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2318370878956486e-06 of space, bias 4.0, pg target 0.0014782045054747782 quantized to 16 (current 16)
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:58:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.697 238887 DEBUG oslo_concurrency.lockutils [None req-cf045882-bf90-4f56-afa8-36bcee05332e 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "e3333751-86a5-40df-9180-a0c8153f06a4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.698 238887 DEBUG oslo_concurrency.lockutils [None req-cf045882-bf90-4f56-afa8-36bcee05332e 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.715 238887 INFO nova.compute.manager [None req-cf045882-bf90-4f56-afa8-36bcee05332e 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Detaching volume f2d7a5ba-c440-4bb5-ab12-e7990d48251a#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.839 238887 INFO nova.virt.block_device [None req-cf045882-bf90-4f56-afa8-36bcee05332e 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Attempting to driver detach volume f2d7a5ba-c440-4bb5-ab12-e7990d48251a from mountpoint /dev/vdb#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.849 238887 DEBUG nova.virt.libvirt.driver [None req-cf045882-bf90-4f56-afa8-36bcee05332e 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Attempting to detach device vdb from instance e3333751-86a5-40df-9180-a0c8153f06a4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.850 238887 DEBUG nova.virt.libvirt.guest [None req-cf045882-bf90-4f56-afa8-36bcee05332e 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 06:58:21 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:58:21 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-f2d7a5ba-c440-4bb5-ab12-e7990d48251a">
Feb  2 06:58:21 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:58:21 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:58:21 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:58:21 np0005604943 nova_compute[238883]:  <serial>f2d7a5ba-c440-4bb5-ab12-e7990d48251a</serial>
Feb  2 06:58:21 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 06:58:21 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:58:21 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.860 238887 INFO nova.virt.libvirt.driver [None req-cf045882-bf90-4f56-afa8-36bcee05332e 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Successfully detached device vdb from instance e3333751-86a5-40df-9180-a0c8153f06a4 from the persistent domain config.#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.861 238887 DEBUG nova.virt.libvirt.driver [None req-cf045882-bf90-4f56-afa8-36bcee05332e 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e3333751-86a5-40df-9180-a0c8153f06a4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.861 238887 DEBUG nova.virt.libvirt.guest [None req-cf045882-bf90-4f56-afa8-36bcee05332e 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 06:58:21 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:58:21 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-f2d7a5ba-c440-4bb5-ab12-e7990d48251a">
Feb  2 06:58:21 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:58:21 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:58:21 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:58:21 np0005604943 nova_compute[238883]:  <serial>f2d7a5ba-c440-4bb5-ab12-e7990d48251a</serial>
Feb  2 06:58:21 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 06:58:21 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:58:21 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.965 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Received event <DeviceRemovedEvent: 1770033501.9651449, e3333751-86a5-40df-9180-a0c8153f06a4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.966 238887 DEBUG nova.virt.libvirt.driver [None req-cf045882-bf90-4f56-afa8-36bcee05332e 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e3333751-86a5-40df-9180-a0c8153f06a4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 06:58:21 np0005604943 nova_compute[238883]: 2026-02-02 11:58:21.968 238887 INFO nova.virt.libvirt.driver [None req-cf045882-bf90-4f56-afa8-36bcee05332e 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Successfully detached device vdb from instance e3333751-86a5-40df-9180-a0c8153f06a4 from the live domain config.#033[00m
Feb  2 06:58:22 np0005604943 nova_compute[238883]: 2026-02-02 11:58:22.148 238887 DEBUG nova.network.neutron [req-04addfdd-c13c-49b7-b538-c963b21c3911 req-5c7b57f9-587e-499f-9dda-91b4d979ee7e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Updated VIF entry in instance network info cache for port c33f8157-4662-40a2-867e-4dac8467a80b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:58:22 np0005604943 nova_compute[238883]: 2026-02-02 11:58:22.152 238887 DEBUG nova.network.neutron [req-04addfdd-c13c-49b7-b538-c963b21c3911 req-5c7b57f9-587e-499f-9dda-91b4d979ee7e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Updating instance_info_cache with network_info: [{"id": "c33f8157-4662-40a2-867e-4dac8467a80b", "address": "fa:16:3e:43:5a:3a", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc33f8157-46", "ovs_interfaceid": "c33f8157-4662-40a2-867e-4dac8467a80b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:58:22 np0005604943 nova_compute[238883]: 2026-02-02 11:58:22.169 238887 DEBUG oslo_concurrency.lockutils [req-04addfdd-c13c-49b7-b538-c963b21c3911 req-5c7b57f9-587e-499f-9dda-91b4d979ee7e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-177de248-c6fd-437b-9326-31ed9842fe34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:58:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 36 KiB/s wr, 131 op/s
Feb  2 06:58:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:58:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Feb  2 06:58:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Feb  2 06:58:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Feb  2 06:58:22 np0005604943 nova_compute[238883]: 2026-02-02 11:58:22.654 238887 DEBUG nova.objects.instance [None req-cf045882-bf90-4f56-afa8-36bcee05332e 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lazy-loading 'flavor' on Instance uuid e3333751-86a5-40df-9180-a0c8153f06a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:58:22 np0005604943 nova_compute[238883]: 2026-02-02 11:58:22.691 238887 DEBUG oslo_concurrency.lockutils [None req-cf045882-bf90-4f56-afa8-36bcee05332e 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:22 np0005604943 nova_compute[238883]: 2026-02-02 11:58:22.980 238887 DEBUG oslo_concurrency.lockutils [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "e3333751-86a5-40df-9180-a0c8153f06a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:22 np0005604943 nova_compute[238883]: 2026-02-02 11:58:22.980 238887 DEBUG oslo_concurrency.lockutils [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:22 np0005604943 nova_compute[238883]: 2026-02-02 11:58:22.981 238887 DEBUG oslo_concurrency.lockutils [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:22 np0005604943 nova_compute[238883]: 2026-02-02 11:58:22.981 238887 DEBUG oslo_concurrency.lockutils [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:22 np0005604943 nova_compute[238883]: 2026-02-02 11:58:22.981 238887 DEBUG oslo_concurrency.lockutils [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:22 np0005604943 nova_compute[238883]: 2026-02-02 11:58:22.982 238887 INFO nova.compute.manager [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Terminating instance#033[00m
Feb  2 06:58:22 np0005604943 nova_compute[238883]: 2026-02-02 11:58:22.983 238887 DEBUG nova.compute.manager [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 06:58:23 np0005604943 kernel: tap2e53d89b-c3 (unregistering): left promiscuous mode
Feb  2 06:58:23 np0005604943 NetworkManager[49093]: <info>  [1770033503.0241] device (tap2e53d89b-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.033 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:23 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:23Z|00083|binding|INFO|Releasing lport 2e53d89b-c3e1-480c-af8c-98b7e9b8d425 from this chassis (sb_readonly=0)
Feb  2 06:58:23 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:23Z|00084|binding|INFO|Setting lport 2e53d89b-c3e1-480c-af8c-98b7e9b8d425 down in Southbound
Feb  2 06:58:23 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:23Z|00085|binding|INFO|Removing iface tap2e53d89b-c3 ovn-installed in OVS
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.040 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:29:95 10.100.0.9'], port_security=['fa:16:3e:b7:29:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e3333751-86a5-40df-9180-a0c8153f06a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-302d1601-7819-4001-9e16-ee97183eb73b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61afd70cadc143c2a9c65f6cec8dc9e8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf550be2-fb79-4050-9dfb-2bfa7b384f11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb72e047-676c-4da5-9d5d-6a9b44c0057a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=2e53d89b-c3e1-480c-af8c-98b7e9b8d425) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.041 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.042 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 2e53d89b-c3e1-480c-af8c-98b7e9b8d425 in datapath 302d1601-7819-4001-9e16-ee97183eb73b unbound from our chassis#033[00m
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.044 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 302d1601-7819-4001-9e16-ee97183eb73b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.045 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1e6a3688-b7ad-4097-b1a6-c6a27d6a8bcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.045 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b namespace which is not needed anymore#033[00m
Feb  2 06:58:23 np0005604943 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Feb  2 06:58:23 np0005604943 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 12.336s CPU time.
Feb  2 06:58:23 np0005604943 systemd-machined[206973]: Machine qemu-7-instance-00000007 terminated.
Feb  2 06:58:23 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[250145]: [NOTICE]   (250149) : haproxy version is 2.8.14-c23fe91
Feb  2 06:58:23 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[250145]: [NOTICE]   (250149) : path to executable is /usr/sbin/haproxy
Feb  2 06:58:23 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[250145]: [WARNING]  (250149) : Exiting Master process...
Feb  2 06:58:23 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[250145]: [ALERT]    (250149) : Current worker (250151) exited with code 143 (Terminated)
Feb  2 06:58:23 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[250145]: [WARNING]  (250149) : All workers exited. Exiting... (0)
Feb  2 06:58:23 np0005604943 systemd[1]: libpod-d9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9.scope: Deactivated successfully.
Feb  2 06:58:23 np0005604943 podman[251558]: 2026-02-02 11:58:23.155279011 +0000 UTC m=+0.030785888 container died d9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb  2 06:58:23 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9-userdata-shm.mount: Deactivated successfully.
Feb  2 06:58:23 np0005604943 systemd[1]: var-lib-containers-storage-overlay-79175fe41426ce05f180faf58ba431d42d067fcc9c0bcb5a55aa74b24f9ab9b3-merged.mount: Deactivated successfully.
Feb  2 06:58:23 np0005604943 podman[251558]: 2026-02-02 11:58:23.199994244 +0000 UTC m=+0.075501111 container cleanup d9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.211 238887 INFO nova.virt.libvirt.driver [-] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Instance destroyed successfully.#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.212 238887 DEBUG nova.objects.instance [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lazy-loading 'resources' on Instance uuid e3333751-86a5-40df-9180-a0c8153f06a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:58:23 np0005604943 systemd[1]: libpod-conmon-d9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9.scope: Deactivated successfully.
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.225 238887 DEBUG nova.virt.libvirt.vif [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:57:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-395658677',display_name='tempest-VolumesBackupsTest-instance-395658677',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-395658677',id=7,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH2E7G7yXreKh+R/5pZvHxH41KpQ1/cpxT1k5tVX9U3p92cG1tl6U58Hl2cMaNmii3kF0ulyFdE8uKaIFXxXHpjnBCsHQnsvTg/if5l+M1u7+7jeXkdUA5ba6jhNDG/1eQ==',key_name='tempest-keypair-259832784',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:57:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='61afd70cadc143c2a9c65f6cec8dc9e8',ramdisk_id='',reservation_id='r-32l91c70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1949354358',owner_user_name='tempest-VolumesBackupsTest-1949354358-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:57:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='619ce2f20dd849f6a462d2162bcccc7a',uuid=e3333751-86a5-40df-9180-a0c8153f06a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "address": "fa:16:3e:b7:29:95", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e53d89b-c3", "ovs_interfaceid": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.227 238887 DEBUG nova.network.os_vif_util [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Converting VIF {"id": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "address": "fa:16:3e:b7:29:95", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e53d89b-c3", "ovs_interfaceid": "2e53d89b-c3e1-480c-af8c-98b7e9b8d425", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.228 238887 DEBUG nova.network.os_vif_util [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:29:95,bridge_name='br-int',has_traffic_filtering=True,id=2e53d89b-c3e1-480c-af8c-98b7e9b8d425,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e53d89b-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.228 238887 DEBUG os_vif [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:29:95,bridge_name='br-int',has_traffic_filtering=True,id=2e53d89b-c3e1-480c-af8c-98b7e9b8d425,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e53d89b-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.230 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.231 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e53d89b-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.237 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.239 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.241 238887 INFO os_vif [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:29:95,bridge_name='br-int',has_traffic_filtering=True,id=2e53d89b-c3e1-480c-af8c-98b7e9b8d425,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e53d89b-c3')#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.269 238887 DEBUG nova.compute.manager [req-46b685f4-36b5-4e23-84c5-e85d45280cba req-7c766d98-321c-4392-9add-1f17dbbf9d95 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Received event network-vif-unplugged-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.270 238887 DEBUG oslo_concurrency.lockutils [req-46b685f4-36b5-4e23-84c5-e85d45280cba req-7c766d98-321c-4392-9add-1f17dbbf9d95 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.271 238887 DEBUG oslo_concurrency.lockutils [req-46b685f4-36b5-4e23-84c5-e85d45280cba req-7c766d98-321c-4392-9add-1f17dbbf9d95 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.271 238887 DEBUG oslo_concurrency.lockutils [req-46b685f4-36b5-4e23-84c5-e85d45280cba req-7c766d98-321c-4392-9add-1f17dbbf9d95 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.272 238887 DEBUG nova.compute.manager [req-46b685f4-36b5-4e23-84c5-e85d45280cba req-7c766d98-321c-4392-9add-1f17dbbf9d95 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] No waiting events found dispatching network-vif-unplugged-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.272 238887 DEBUG nova.compute.manager [req-46b685f4-36b5-4e23-84c5-e85d45280cba req-7c766d98-321c-4392-9add-1f17dbbf9d95 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Received event network-vif-unplugged-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 06:58:23 np0005604943 podman[251597]: 2026-02-02 11:58:23.272826814 +0000 UTC m=+0.047934938 container remove d9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.278 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a296097c-08ed-4b69-b7b5-9e93a95a6a36]: (4, ('Mon Feb  2 11:58:23 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b (d9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9)\nd9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9\nMon Feb  2 11:58:23 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b (d9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9)\nd9249a0c356ae9607953b61c440e8f603da2f5f8ad4cde6f2901f54242fc8fa9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.281 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[68caafae-7744-4029-b7e5-c32632c4642b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.281 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap302d1601-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.283 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:23 np0005604943 kernel: tap302d1601-70: left promiscuous mode
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.291 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.296 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c9dca245-973a-4629-827c-c9c166a172a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.319 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b19cf2a7-0deb-4d2d-ab83-0907c11fcdfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.320 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[26cc85a8-a4bc-48e2-9f12-859b2f622b0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.339 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[693d83ec-9a8d-4edd-8518-c6e8db19943e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 391864, 'reachable_time': 29627, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251630, 'error': None, 'target': 'ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.342 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 06:58:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:23.342 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4c65a7-6adc-482e-a192-6fb615da00b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:23 np0005604943 systemd[1]: run-netns-ovnmeta\x2d302d1601\x2d7819\x2d4001\x2d9e16\x2dee97183eb73b.mount: Deactivated successfully.
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.511 238887 INFO nova.virt.libvirt.driver [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Deleting instance files /var/lib/nova/instances/e3333751-86a5-40df-9180-a0c8153f06a4_del#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.515 238887 INFO nova.virt.libvirt.driver [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Deletion of /var/lib/nova/instances/e3333751-86a5-40df-9180-a0c8153f06a4_del complete#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.562 238887 INFO nova.compute.manager [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Took 0.58 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.563 238887 DEBUG oslo.service.loopingcall [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.564 238887 DEBUG nova.compute.manager [-] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 06:58:23 np0005604943 nova_compute[238883]: 2026-02-02 11:58:23.564 238887 DEBUG nova.network.neutron [-] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 06:58:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 144 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 40 KiB/s wr, 322 op/s
Feb  2 06:58:24 np0005604943 nova_compute[238883]: 2026-02-02 11:58:24.275 238887 DEBUG nova.network.neutron [-] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:58:24 np0005604943 nova_compute[238883]: 2026-02-02 11:58:24.294 238887 INFO nova.compute.manager [-] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Took 0.73 seconds to deallocate network for instance.#033[00m
Feb  2 06:58:24 np0005604943 nova_compute[238883]: 2026-02-02 11:58:24.343 238887 DEBUG nova.compute.manager [req-6b54b0f2-6785-4857-b77c-edc195918faf req-bdc9e9bf-fc73-4f1a-ae40-dedba67859e4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Received event network-vif-deleted-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:58:24 np0005604943 nova_compute[238883]: 2026-02-02 11:58:24.346 238887 DEBUG oslo_concurrency.lockutils [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:24 np0005604943 nova_compute[238883]: 2026-02-02 11:58:24.347 238887 DEBUG oslo_concurrency.lockutils [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:24 np0005604943 nova_compute[238883]: 2026-02-02 11:58:24.423 238887 DEBUG oslo_concurrency.processutils [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:58:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432661104' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:58:25 np0005604943 nova_compute[238883]: 2026-02-02 11:58:25.000 238887 DEBUG oslo_concurrency.processutils [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:25 np0005604943 nova_compute[238883]: 2026-02-02 11:58:25.005 238887 DEBUG nova.compute.provider_tree [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:58:25 np0005604943 nova_compute[238883]: 2026-02-02 11:58:25.021 238887 DEBUG nova.scheduler.client.report [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:58:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4239329316' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4239329316' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:25 np0005604943 nova_compute[238883]: 2026-02-02 11:58:25.041 238887 DEBUG oslo_concurrency.lockutils [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:25 np0005604943 nova_compute[238883]: 2026-02-02 11:58:25.064 238887 INFO nova.scheduler.client.report [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Deleted allocations for instance e3333751-86a5-40df-9180-a0c8153f06a4#033[00m
Feb  2 06:58:25 np0005604943 nova_compute[238883]: 2026-02-02 11:58:25.126 238887 DEBUG oslo_concurrency.lockutils [None req-d0656fc8-0015-4e12-8fab-4d9341d0d195 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:25 np0005604943 nova_compute[238883]: 2026-02-02 11:58:25.352 238887 DEBUG nova.compute.manager [req-dba2d1c7-c2cd-4ac9-8d8e-07820382ea19 req-9b23ee0b-dad7-4cde-831a-8ad6c86c6185 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Received event network-vif-plugged-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:58:25 np0005604943 nova_compute[238883]: 2026-02-02 11:58:25.354 238887 DEBUG oslo_concurrency.lockutils [req-dba2d1c7-c2cd-4ac9-8d8e-07820382ea19 req-9b23ee0b-dad7-4cde-831a-8ad6c86c6185 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:25 np0005604943 nova_compute[238883]: 2026-02-02 11:58:25.354 238887 DEBUG oslo_concurrency.lockutils [req-dba2d1c7-c2cd-4ac9-8d8e-07820382ea19 req-9b23ee0b-dad7-4cde-831a-8ad6c86c6185 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:25 np0005604943 nova_compute[238883]: 2026-02-02 11:58:25.354 238887 DEBUG oslo_concurrency.lockutils [req-dba2d1c7-c2cd-4ac9-8d8e-07820382ea19 req-9b23ee0b-dad7-4cde-831a-8ad6c86c6185 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e3333751-86a5-40df-9180-a0c8153f06a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:25 np0005604943 nova_compute[238883]: 2026-02-02 11:58:25.354 238887 DEBUG nova.compute.manager [req-dba2d1c7-c2cd-4ac9-8d8e-07820382ea19 req-9b23ee0b-dad7-4cde-831a-8ad6c86c6185 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] No waiting events found dispatching network-vif-plugged-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:58:25 np0005604943 nova_compute[238883]: 2026-02-02 11:58:25.355 238887 WARNING nova.compute.manager [req-dba2d1c7-c2cd-4ac9-8d8e-07820382ea19 req-9b23ee0b-dad7-4cde-831a-8ad6c86c6185 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Received unexpected event network-vif-plugged-2e53d89b-c3e1-480c-af8c-98b7e9b8d425 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 06:58:26 np0005604943 nova_compute[238883]: 2026-02-02 11:58:26.082 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 144 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 32 KiB/s wr, 252 op/s
Feb  2 06:58:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3154389838' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3154389838' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Feb  2 06:58:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Feb  2 06:58:26 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Feb  2 06:58:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2175207867' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2175207867' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:58:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Feb  2 06:58:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Feb  2 06:58:27 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Feb  2 06:58:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/461687499' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/461687499' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:27 np0005604943 nova_compute[238883]: 2026-02-02 11:58:27.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:58:27 np0005604943 nova_compute[238883]: 2026-02-02 11:58:27.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:58:27 np0005604943 nova_compute[238883]: 2026-02-02 11:58:27.665 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:27 np0005604943 nova_compute[238883]: 2026-02-02 11:58:27.665 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:27 np0005604943 nova_compute[238883]: 2026-02-02 11:58:27.666 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:27 np0005604943 nova_compute[238883]: 2026-02-02 11:58:27.666 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 06:58:27 np0005604943 nova_compute[238883]: 2026-02-02 11:58:27.666 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:58:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2984773506' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:58:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 88 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 9.2 KiB/s wr, 340 op/s
Feb  2 06:58:28 np0005604943 nova_compute[238883]: 2026-02-02 11:58:28.238 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:28 np0005604943 nova_compute[238883]: 2026-02-02 11:58:28.249 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:28 np0005604943 nova_compute[238883]: 2026-02-02 11:58:28.317 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 06:58:28 np0005604943 nova_compute[238883]: 2026-02-02 11:58:28.317 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 06:58:28 np0005604943 nova_compute[238883]: 2026-02-02 11:58:28.459 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:58:28 np0005604943 nova_compute[238883]: 2026-02-02 11:58:28.461 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4414MB free_disk=59.93125404603779GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 06:58:28 np0005604943 nova_compute[238883]: 2026-02-02 11:58:28.461 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:28 np0005604943 nova_compute[238883]: 2026-02-02 11:58:28.461 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:28 np0005604943 nova_compute[238883]: 2026-02-02 11:58:28.521 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 177de248-c6fd-437b-9326-31ed9842fe34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 06:58:28 np0005604943 nova_compute[238883]: 2026-02-02 11:58:28.521 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 06:58:28 np0005604943 nova_compute[238883]: 2026-02-02 11:58:28.522 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 06:58:28 np0005604943 nova_compute[238883]: 2026-02-02 11:58:28.557 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:58:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2378935341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:58:29 np0005604943 nova_compute[238883]: 2026-02-02 11:58:29.091 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:29 np0005604943 nova_compute[238883]: 2026-02-02 11:58:29.096 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:58:29 np0005604943 nova_compute[238883]: 2026-02-02 11:58:29.119 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:58:29 np0005604943 nova_compute[238883]: 2026-02-02 11:58:29.150 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 06:58:29 np0005604943 nova_compute[238883]: 2026-02-02 11:58:29.150 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:29 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:29Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:5a:3a 10.100.0.11
Feb  2 06:58:29 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:29Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:5a:3a 10.100.0.11
Feb  2 06:58:30 np0005604943 nova_compute[238883]: 2026-02-02 11:58:30.149 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:58:30 np0005604943 nova_compute[238883]: 2026-02-02 11:58:30.150 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:58:30 np0005604943 nova_compute[238883]: 2026-02-02 11:58:30.150 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 06:58:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 91 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 580 KiB/s wr, 297 op/s
Feb  2 06:58:30 np0005604943 nova_compute[238883]: 2026-02-02 11:58:30.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:58:30 np0005604943 nova_compute[238883]: 2026-02-02 11:58:30.642 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 06:58:30 np0005604943 nova_compute[238883]: 2026-02-02 11:58:30.642 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 06:58:31 np0005604943 nova_compute[238883]: 2026-02-02 11:58:31.084 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:31 np0005604943 nova_compute[238883]: 2026-02-02 11:58:31.139 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "refresh_cache-177de248-c6fd-437b-9326-31ed9842fe34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:58:31 np0005604943 nova_compute[238883]: 2026-02-02 11:58:31.139 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquired lock "refresh_cache-177de248-c6fd-437b-9326-31ed9842fe34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:58:31 np0005604943 nova_compute[238883]: 2026-02-02 11:58:31.140 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 06:58:31 np0005604943 nova_compute[238883]: 2026-02-02 11:58:31.140 238887 DEBUG nova.objects.instance [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 177de248-c6fd-437b-9326-31ed9842fe34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:58:32 np0005604943 podman[251700]: 2026-02-02 11:58:32.055042015 +0000 UTC m=+0.075641975 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:58:32 np0005604943 podman[251699]: 2026-02-02 11:58:32.080836952 +0000 UTC m=+0.101886374 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 06:58:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 91 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 185 KiB/s rd, 549 KiB/s wr, 133 op/s
Feb  2 06:58:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:58:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Feb  2 06:58:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Feb  2 06:58:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Feb  2 06:58:33 np0005604943 nova_compute[238883]: 2026-02-02 11:58:33.242 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:33 np0005604943 nova_compute[238883]: 2026-02-02 11:58:33.378 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Updating instance_info_cache with network_info: [{"id": "c33f8157-4662-40a2-867e-4dac8467a80b", "address": "fa:16:3e:43:5a:3a", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc33f8157-46", "ovs_interfaceid": "c33f8157-4662-40a2-867e-4dac8467a80b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:58:33 np0005604943 nova_compute[238883]: 2026-02-02 11:58:33.404 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Releasing lock "refresh_cache-177de248-c6fd-437b-9326-31ed9842fe34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:58:33 np0005604943 nova_compute[238883]: 2026-02-02 11:58:33.405 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 06:58:33 np0005604943 nova_compute[238883]: 2026-02-02 11:58:33.405 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:58:33 np0005604943 nova_compute[238883]: 2026-02-02 11:58:33.405 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:58:33 np0005604943 nova_compute[238883]: 2026-02-02 11:58:33.405 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:58:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 3.4 MiB/s wr, 223 op/s
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.085 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.172 238887 DEBUG oslo_concurrency.lockutils [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "177de248-c6fd-437b-9326-31ed9842fe34" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.173 238887 DEBUG oslo_concurrency.lockutils [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.194 238887 DEBUG nova.objects.instance [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lazy-loading 'flavor' on Instance uuid 177de248-c6fd-437b-9326-31ed9842fe34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:58:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.9 MiB/s wr, 179 op/s
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.227 238887 INFO nova.virt.libvirt.driver [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Ignoring supplied device name: /dev/vdb#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.239 238887 DEBUG oslo_concurrency.lockutils [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:58:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1573930284' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.503 238887 DEBUG oslo_concurrency.lockutils [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "177de248-c6fd-437b-9326-31ed9842fe34" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.504 238887 DEBUG oslo_concurrency.lockutils [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.504 238887 INFO nova.compute.manager [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Attaching volume a1c81c48-b5bc-49e5-82d7-78c545037941 to /dev/vdb#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.648 238887 DEBUG os_brick.utils [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.649 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.662 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.662 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[c171b38f-5b4f-43e5-a34c-203d56513f72]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.665 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.675 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.675 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0e52d7-d460-46e2-8ba5-2c89c1df0c8f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.677 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.686 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.686 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[c22e99a5-464c-4453-bc8d-352e4f20793b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.687 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[55c448aa-5be4-4f99-ba12-b3c77f5e15ac]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.688 238887 DEBUG oslo_concurrency.processutils [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.712 238887 DEBUG oslo_concurrency.processutils [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.715 238887 DEBUG os_brick.initiator.connectors.lightos [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.716 238887 DEBUG os_brick.initiator.connectors.lightos [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.716 238887 DEBUG os_brick.initiator.connectors.lightos [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.716 238887 DEBUG os_brick.utils [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 06:58:36 np0005604943 nova_compute[238883]: 2026-02-02 11:58:36.717 238887 DEBUG nova.virt.block_device [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Updating existing volume attachment record: de92745e-c2ca-4c0f-98da-11186798b1cf _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 06:58:37 np0005604943 nova_compute[238883]: 2026-02-02 11:58:37.398 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:58:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:58:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3482439464' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:58:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:58:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Feb  2 06:58:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Feb  2 06:58:37 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Feb  2 06:58:37 np0005604943 nova_compute[238883]: 2026-02-02 11:58:37.656 238887 DEBUG nova.objects.instance [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lazy-loading 'flavor' on Instance uuid 177de248-c6fd-437b-9326-31ed9842fe34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:58:37 np0005604943 nova_compute[238883]: 2026-02-02 11:58:37.677 238887 DEBUG nova.virt.libvirt.driver [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Attempting to attach volume a1c81c48-b5bc-49e5-82d7-78c545037941 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 06:58:37 np0005604943 nova_compute[238883]: 2026-02-02 11:58:37.680 238887 DEBUG nova.virt.libvirt.guest [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 06:58:37 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:58:37 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-a1c81c48-b5bc-49e5-82d7-78c545037941">
Feb  2 06:58:37 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:58:37 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:58:37 np0005604943 nova_compute[238883]:  <auth username="openstack">
Feb  2 06:58:37 np0005604943 nova_compute[238883]:    <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:58:37 np0005604943 nova_compute[238883]:  </auth>
Feb  2 06:58:37 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:58:37 np0005604943 nova_compute[238883]:  <serial>a1c81c48-b5bc-49e5-82d7-78c545037941</serial>
Feb  2 06:58:37 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:58:37 np0005604943 nova_compute[238883]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 06:58:37 np0005604943 nova_compute[238883]: 2026-02-02 11:58:37.785 238887 DEBUG nova.virt.libvirt.driver [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:58:37 np0005604943 nova_compute[238883]: 2026-02-02 11:58:37.786 238887 DEBUG nova.virt.libvirt.driver [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:58:37 np0005604943 nova_compute[238883]: 2026-02-02 11:58:37.787 238887 DEBUG nova.virt.libvirt.driver [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:58:37 np0005604943 nova_compute[238883]: 2026-02-02 11:58:37.787 238887 DEBUG nova.virt.libvirt.driver [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] No VIF found with MAC fa:16:3e:43:5a:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:58:37 np0005604943 nova_compute[238883]: 2026-02-02 11:58:37.972 238887 DEBUG oslo_concurrency.lockutils [None req-1a0a756f-8cc5-43e1-8a0c-11d2fbb34a89 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:38 np0005604943 nova_compute[238883]: 2026-02-02 11:58:38.209 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033503.2080665, e3333751-86a5-40df-9180-a0c8153f06a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:58:38 np0005604943 nova_compute[238883]: 2026-02-02 11:58:38.210 238887 INFO nova.compute.manager [-] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] VM Stopped (Lifecycle Event)#033[00m
Feb  2 06:58:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 191 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 6.6 MiB/s rd, 7.0 MiB/s wr, 150 op/s
Feb  2 06:58:38 np0005604943 nova_compute[238883]: 2026-02-02 11:58:38.231 238887 DEBUG nova.compute.manager [None req-e05436b9-df42-4a54-a687-16b51cde5a15 - - - - - -] [instance: e3333751-86a5-40df-9180-a0c8153f06a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:58:38 np0005604943 nova_compute[238883]: 2026-02-02 11:58:38.246 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.556892) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033518556963, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1306, "num_deletes": 256, "total_data_size": 1740846, "memory_usage": 1767808, "flush_reason": "Manual Compaction"}
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033518565278, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1719636, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20200, "largest_seqno": 21505, "table_properties": {"data_size": 1713255, "index_size": 3584, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14237, "raw_average_key_size": 20, "raw_value_size": 1700193, "raw_average_value_size": 2478, "num_data_blocks": 159, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033432, "oldest_key_time": 1770033432, "file_creation_time": 1770033518, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 8416 microseconds, and 4075 cpu microseconds.
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.565317) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1719636 bytes OK
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.565339) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.568178) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.568194) EVENT_LOG_v1 {"time_micros": 1770033518568189, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.568236) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1734772, prev total WAL file size 1734813, number of live WAL files 2.
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.568799) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1679KB)], [47(7562KB)]
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033518568876, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9464108, "oldest_snapshot_seqno": -1}
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4637 keys, 7676106 bytes, temperature: kUnknown
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033518608388, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7676106, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7642862, "index_size": 20526, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 115257, "raw_average_key_size": 24, "raw_value_size": 7556926, "raw_average_value_size": 1629, "num_data_blocks": 849, "num_entries": 4637, "num_filter_entries": 4637, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770033518, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.608716) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7676106 bytes
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.611185) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 238.9 rd, 193.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.4 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.0) write-amplify(4.5) OK, records in: 5163, records dropped: 526 output_compression: NoCompression
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.611234) EVENT_LOG_v1 {"time_micros": 1770033518611197, "job": 24, "event": "compaction_finished", "compaction_time_micros": 39617, "compaction_time_cpu_micros": 16727, "output_level": 6, "num_output_files": 1, "total_output_size": 7676106, "num_input_records": 5163, "num_output_records": 4637, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033518611563, "job": 24, "event": "table_file_deletion", "file_number": 49}
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033518612330, "job": 24, "event": "table_file_deletion", "file_number": 47}
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.568721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.612402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.612408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.612410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.612412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:58:38 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-11:58:38.612414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 06:58:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/313420887' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/313420887' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Feb  2 06:58:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Feb  2 06:58:39 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Feb  2 06:58:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 240 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.1 MiB/s wr, 178 op/s
Feb  2 06:58:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2532501396' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2532501396' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Feb  2 06:58:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Feb  2 06:58:40 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Feb  2 06:58:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:58:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:58:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:58:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:58:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:58:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:58:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:58:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1977145286' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:58:41 np0005604943 nova_compute[238883]: 2026-02-02 11:58:41.088 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Feb  2 06:58:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Feb  2 06:58:41 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Feb  2 06:58:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 240 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 463 KiB/s rd, 5.0 MiB/s wr, 125 op/s
Feb  2 06:58:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:58:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Feb  2 06:58:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Feb  2 06:58:42 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Feb  2 06:58:43 np0005604943 nova_compute[238883]: 2026-02-02 11:58:43.249 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Feb  2 06:58:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Feb  2 06:58:43 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Feb  2 06:58:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4180467089' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4180467089' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 253 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 8.2 MiB/s rd, 7.6 MiB/s wr, 374 op/s
Feb  2 06:58:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Feb  2 06:58:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Feb  2 06:58:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Feb  2 06:58:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3914590937' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3914590937' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:46 np0005604943 nova_compute[238883]: 2026-02-02 11:58:46.108 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 253 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 6.6 MiB/s wr, 324 op/s
Feb  2 06:58:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Feb  2 06:58:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Feb  2 06:58:46 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Feb  2 06:58:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3044822319' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3044822319' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:58:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Feb  2 06:58:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Feb  2 06:58:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Feb  2 06:58:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 151 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 289 KiB/s wr, 314 op/s
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.230 238887 DEBUG oslo_concurrency.lockutils [None req-3cad8e9f-4d95-4db0-9fec-627875c1d469 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "177de248-c6fd-437b-9326-31ed9842fe34" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.230 238887 DEBUG oslo_concurrency.lockutils [None req-3cad8e9f-4d95-4db0-9fec-627875c1d469 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.247 238887 INFO nova.compute.manager [None req-3cad8e9f-4d95-4db0-9fec-627875c1d469 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Detaching volume a1c81c48-b5bc-49e5-82d7-78c545037941#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.254 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.371 238887 DEBUG oslo_concurrency.lockutils [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "177de248-c6fd-437b-9326-31ed9842fe34" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.392 238887 INFO nova.virt.block_device [None req-3cad8e9f-4d95-4db0-9fec-627875c1d469 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Attempting to driver detach volume a1c81c48-b5bc-49e5-82d7-78c545037941 from mountpoint /dev/vdb#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.404 238887 DEBUG nova.virt.libvirt.driver [None req-3cad8e9f-4d95-4db0-9fec-627875c1d469 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Attempting to detach device vdb from instance 177de248-c6fd-437b-9326-31ed9842fe34 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.405 238887 DEBUG nova.virt.libvirt.guest [None req-3cad8e9f-4d95-4db0-9fec-627875c1d469 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 06:58:48 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:58:48 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-a1c81c48-b5bc-49e5-82d7-78c545037941">
Feb  2 06:58:48 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:58:48 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:58:48 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:58:48 np0005604943 nova_compute[238883]:  <serial>a1c81c48-b5bc-49e5-82d7-78c545037941</serial>
Feb  2 06:58:48 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 06:58:48 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:58:48 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.417 238887 INFO nova.virt.libvirt.driver [None req-3cad8e9f-4d95-4db0-9fec-627875c1d469 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Successfully detached device vdb from instance 177de248-c6fd-437b-9326-31ed9842fe34 from the persistent domain config.#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.418 238887 DEBUG nova.virt.libvirt.driver [None req-3cad8e9f-4d95-4db0-9fec-627875c1d469 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 177de248-c6fd-437b-9326-31ed9842fe34 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.418 238887 DEBUG nova.virt.libvirt.guest [None req-3cad8e9f-4d95-4db0-9fec-627875c1d469 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 06:58:48 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:58:48 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-a1c81c48-b5bc-49e5-82d7-78c545037941">
Feb  2 06:58:48 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:58:48 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:58:48 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:58:48 np0005604943 nova_compute[238883]:  <serial>a1c81c48-b5bc-49e5-82d7-78c545037941</serial>
Feb  2 06:58:48 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 06:58:48 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:58:48 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.470 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Received event <DeviceRemovedEvent: 1770033528.470536, 177de248-c6fd-437b-9326-31ed9842fe34 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.472 238887 DEBUG nova.virt.libvirt.driver [None req-3cad8e9f-4d95-4db0-9fec-627875c1d469 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 177de248-c6fd-437b-9326-31ed9842fe34 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.474 238887 INFO nova.virt.libvirt.driver [None req-3cad8e9f-4d95-4db0-9fec-627875c1d469 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Successfully detached device vdb from instance 177de248-c6fd-437b-9326-31ed9842fe34 from the live domain config.#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.635 238887 DEBUG nova.objects.instance [None req-3cad8e9f-4d95-4db0-9fec-627875c1d469 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lazy-loading 'flavor' on Instance uuid 177de248-c6fd-437b-9326-31ed9842fe34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.671 238887 DEBUG oslo_concurrency.lockutils [None req-3cad8e9f-4d95-4db0-9fec-627875c1d469 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.672 238887 DEBUG oslo_concurrency.lockutils [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.672 238887 DEBUG oslo_concurrency.lockutils [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "177de248-c6fd-437b-9326-31ed9842fe34-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.673 238887 DEBUG oslo_concurrency.lockutils [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.673 238887 DEBUG oslo_concurrency.lockutils [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.674 238887 INFO nova.compute.manager [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Terminating instance#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.675 238887 DEBUG nova.compute.manager [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 06:58:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Feb  2 06:58:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Feb  2 06:58:48 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Feb  2 06:58:48 np0005604943 kernel: tapc33f8157-46 (unregistering): left promiscuous mode
Feb  2 06:58:48 np0005604943 NetworkManager[49093]: <info>  [1770033528.7881] device (tapc33f8157-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 06:58:48 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:48Z|00086|binding|INFO|Releasing lport c33f8157-4662-40a2-867e-4dac8467a80b from this chassis (sb_readonly=0)
Feb  2 06:58:48 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:48Z|00087|binding|INFO|Setting lport c33f8157-4662-40a2-867e-4dac8467a80b down in Southbound
Feb  2 06:58:48 np0005604943 ovn_controller[145056]: 2026-02-02T11:58:48Z|00088|binding|INFO|Removing iface tapc33f8157-46 ovn-installed in OVS
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.797 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.809 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:48.811 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:5a:3a 10.100.0.11'], port_security=['fa:16:3e:43:5a:3a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '177de248-c6fd-437b-9326-31ed9842fe34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-edd3a331-b14a-4730-a21c-7fc793b77005', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c061a009eae241049a1e3a1c35aa2503', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07a4532a-2c65-4c57-9063-6634eb312f26', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be5764d7-de7f-4844-afc6-7eadee6d6d3c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=c33f8157-4662-40a2-867e-4dac8467a80b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:58:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:48.813 155011 INFO neutron.agent.ovn.metadata.agent [-] Port c33f8157-4662-40a2-867e-4dac8467a80b in datapath edd3a331-b14a-4730-a21c-7fc793b77005 unbound from our chassis#033[00m
Feb  2 06:58:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:48.814 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network edd3a331-b14a-4730-a21c-7fc793b77005, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 06:58:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:48.815 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[019a127c-d212-4b28-8528-dda671436b24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:48.816 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005 namespace which is not needed anymore#033[00m
Feb  2 06:58:48 np0005604943 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Feb  2 06:58:48 np0005604943 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 13.357s CPU time.
Feb  2 06:58:48 np0005604943 systemd-machined[206973]: Machine qemu-8-instance-00000008 terminated.
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.908 238887 INFO nova.virt.libvirt.driver [-] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Instance destroyed successfully.#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.908 238887 DEBUG nova.objects.instance [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lazy-loading 'resources' on Instance uuid 177de248-c6fd-437b-9326-31ed9842fe34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.936 238887 DEBUG nova.virt.libvirt.vif [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:58:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1485192233',display_name='tempest-VolumesSnapshotTestJSON-instance-1485192233',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1485192233',id=8,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL6GxAVrfalGARmJw8jAxIibAgJ2b4nhzo9z62CVUPaqM4r5hNCPfuium6ZNr5kKgsWulTQcmag7XM8ABfuBli83zuAfWi/T+KoQ2rZbscPYLel85XacOnO64w6bfSTvfg==',key_name='tempest-keypair-490923632',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:58:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c061a009eae241049a1e3a1c35aa2503',ramdisk_id='',reservation_id='r-99c5ss6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-2018180325',owner_user_name='tempest-VolumesSnapshotTestJSON-2018180325-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:58:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4846ccd205b54116a828ad91820ef58d',uuid=177de248-c6fd-437b-9326-31ed9842fe34,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c33f8157-4662-40a2-867e-4dac8467a80b", "address": "fa:16:3e:43:5a:3a", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc33f8157-46", "ovs_interfaceid": "c33f8157-4662-40a2-867e-4dac8467a80b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.937 238887 DEBUG nova.network.os_vif_util [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Converting VIF {"id": "c33f8157-4662-40a2-867e-4dac8467a80b", "address": "fa:16:3e:43:5a:3a", "network": {"id": "edd3a331-b14a-4730-a21c-7fc793b77005", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1296637809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c061a009eae241049a1e3a1c35aa2503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc33f8157-46", "ovs_interfaceid": "c33f8157-4662-40a2-867e-4dac8467a80b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.937 238887 DEBUG nova.network.os_vif_util [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:5a:3a,bridge_name='br-int',has_traffic_filtering=True,id=c33f8157-4662-40a2-867e-4dac8467a80b,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc33f8157-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.938 238887 DEBUG os_vif [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:5a:3a,bridge_name='br-int',has_traffic_filtering=True,id=c33f8157-4662-40a2-867e-4dac8467a80b,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc33f8157-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.939 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.939 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc33f8157-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.942 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:48 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[251477]: [NOTICE]   (251488) : haproxy version is 2.8.14-c23fe91
Feb  2 06:58:48 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[251477]: [NOTICE]   (251488) : path to executable is /usr/sbin/haproxy
Feb  2 06:58:48 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[251477]: [WARNING]  (251488) : Exiting Master process...
Feb  2 06:58:48 np0005604943 nova_compute[238883]: 2026-02-02 11:58:48.945 238887 INFO os_vif [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:5a:3a,bridge_name='br-int',has_traffic_filtering=True,id=c33f8157-4662-40a2-867e-4dac8467a80b,network=Network(edd3a331-b14a-4730-a21c-7fc793b77005),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc33f8157-46')#033[00m
Feb  2 06:58:48 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[251477]: [ALERT]    (251488) : Current worker (251498) exited with code 143 (Terminated)
Feb  2 06:58:48 np0005604943 neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005[251477]: [WARNING]  (251488) : All workers exited. Exiting... (0)
Feb  2 06:58:48 np0005604943 systemd[1]: libpod-2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3.scope: Deactivated successfully.
Feb  2 06:58:48 np0005604943 podman[251798]: 2026-02-02 11:58:48.955183295 +0000 UTC m=+0.072074302 container died 2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:58:49 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3-userdata-shm.mount: Deactivated successfully.
Feb  2 06:58:49 np0005604943 systemd[1]: var-lib-containers-storage-overlay-9006a10451d02e49b08a32eb412d261a3d300280b9bf53160b290595a81981ad-merged.mount: Deactivated successfully.
Feb  2 06:58:49 np0005604943 podman[251798]: 2026-02-02 11:58:49.037173905 +0000 UTC m=+0.154064922 container cleanup 2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:58:49 np0005604943 systemd[1]: libpod-conmon-2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3.scope: Deactivated successfully.
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.075 238887 DEBUG nova.compute.manager [req-201d2a72-3351-4e6a-97a3-e07297a82de7 req-a9aa7af2-62f4-4348-8193-ee00684334be 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Received event network-vif-unplugged-c33f8157-4662-40a2-867e-4dac8467a80b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.076 238887 DEBUG oslo_concurrency.lockutils [req-201d2a72-3351-4e6a-97a3-e07297a82de7 req-a9aa7af2-62f4-4348-8193-ee00684334be 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "177de248-c6fd-437b-9326-31ed9842fe34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.076 238887 DEBUG oslo_concurrency.lockutils [req-201d2a72-3351-4e6a-97a3-e07297a82de7 req-a9aa7af2-62f4-4348-8193-ee00684334be 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.076 238887 DEBUG oslo_concurrency.lockutils [req-201d2a72-3351-4e6a-97a3-e07297a82de7 req-a9aa7af2-62f4-4348-8193-ee00684334be 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.076 238887 DEBUG nova.compute.manager [req-201d2a72-3351-4e6a-97a3-e07297a82de7 req-a9aa7af2-62f4-4348-8193-ee00684334be 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] No waiting events found dispatching network-vif-unplugged-c33f8157-4662-40a2-867e-4dac8467a80b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.077 238887 DEBUG nova.compute.manager [req-201d2a72-3351-4e6a-97a3-e07297a82de7 req-a9aa7af2-62f4-4348-8193-ee00684334be 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Received event network-vif-unplugged-c33f8157-4662-40a2-867e-4dac8467a80b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 06:58:49 np0005604943 podman[251857]: 2026-02-02 11:58:49.093267496 +0000 UTC m=+0.040904013 container remove 2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 06:58:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:49.097 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[4b21c399-c6ab-439e-8aa3-b1e24064cc49]: (4, ('Mon Feb  2 11:58:48 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005 (2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3)\n2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3\nMon Feb  2 11:58:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005 (2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3)\n2ba940cc425f9f2f0991591f99db28520d300601c83656227566eb4e9f690cf3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:49.098 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[25725200-dabc-4dde-a6e4-92b1b5be011a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:49.099 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedd3a331-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.101 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:49 np0005604943 kernel: tapedd3a331-b0: left promiscuous mode
Feb  2 06:58:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:49.104 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7038083f-137f-4a16-af2d-8f68131080be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.109 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:49.116 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[207dc4b4-d369-409a-b569-cba1a62e4d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:49.117 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c1fdb53e-1d63-4fe9-9d83-425b51120d81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:49.130 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[964c3716-15ce-4065-8a0c-68a93216f7eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394201, 'reachable_time': 44086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251873, 'error': None, 'target': 'ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:49 np0005604943 systemd[1]: run-netns-ovnmeta\x2dedd3a331\x2db14a\x2d4730\x2da21c\x2d7fc793b77005.mount: Deactivated successfully.
Feb  2 06:58:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:49.131 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-edd3a331-b14a-4730-a21c-7fc793b77005 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 06:58:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:49.131 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[60a96a12-7686-4d48-bc57-6473475a8196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.223 238887 INFO nova.virt.libvirt.driver [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Deleting instance files /var/lib/nova/instances/177de248-c6fd-437b-9326-31ed9842fe34_del#033[00m
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.224 238887 INFO nova.virt.libvirt.driver [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Deletion of /var/lib/nova/instances/177de248-c6fd-437b-9326-31ed9842fe34_del complete#033[00m
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.299 238887 INFO nova.compute.manager [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.299 238887 DEBUG oslo.service.loopingcall [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.299 238887 DEBUG nova.compute.manager [-] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 06:58:49 np0005604943 nova_compute[238883]: 2026-02-02 11:58:49.300 238887 DEBUG nova.network.neutron [-] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 06:58:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/38036830' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/38036830' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 9.3 KiB/s wr, 170 op/s
Feb  2 06:58:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:58:50 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3263493749' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:58:50 np0005604943 nova_compute[238883]: 2026-02-02 11:58:50.409 238887 DEBUG nova.network.neutron [-] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:58:50 np0005604943 nova_compute[238883]: 2026-02-02 11:58:50.429 238887 INFO nova.compute.manager [-] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Took 1.13 seconds to deallocate network for instance.#033[00m
Feb  2 06:58:50 np0005604943 nova_compute[238883]: 2026-02-02 11:58:50.491 238887 DEBUG nova.compute.manager [req-dc9402f9-47d5-4114-9609-111c2809d56d req-17e544a7-b5a8-487a-9dca-f76995aba258 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Received event network-vif-deleted-c33f8157-4662-40a2-867e-4dac8467a80b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:58:50 np0005604943 nova_compute[238883]: 2026-02-02 11:58:50.520 238887 WARNING nova.volume.cinder [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Attachment de92745e-c2ca-4c0f-98da-11186798b1cf does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = de92745e-c2ca-4c0f-98da-11186798b1cf. (HTTP 404) (Request-ID: req-69452687-f17a-4fe3-b914-c736d7bda2fa)#033[00m
Feb  2 06:58:50 np0005604943 nova_compute[238883]: 2026-02-02 11:58:50.520 238887 INFO nova.compute.manager [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Took 0.09 seconds to detach 1 volumes for instance.#033[00m
Feb  2 06:58:50 np0005604943 nova_compute[238883]: 2026-02-02 11:58:50.594 238887 DEBUG oslo_concurrency.lockutils [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:50 np0005604943 nova_compute[238883]: 2026-02-02 11:58:50.594 238887 DEBUG oslo_concurrency.lockutils [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:50 np0005604943 nova_compute[238883]: 2026-02-02 11:58:50.648 238887 DEBUG oslo_concurrency.processutils [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:58:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Feb  2 06:58:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Feb  2 06:58:50 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Feb  2 06:58:51 np0005604943 nova_compute[238883]: 2026-02-02 11:58:51.109 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:58:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:58:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4185010490' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:58:51 np0005604943 nova_compute[238883]: 2026-02-02 11:58:51.189 238887 DEBUG oslo_concurrency.processutils [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:58:51 np0005604943 nova_compute[238883]: 2026-02-02 11:58:51.194 238887 DEBUG nova.compute.provider_tree [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:58:51 np0005604943 nova_compute[238883]: 2026-02-02 11:58:51.203 238887 DEBUG nova.compute.manager [req-a6366a79-ddd0-4b94-a35a-444f683cb3cf req-3d235406-2034-4ffc-a979-b18059874dfb 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Received event network-vif-plugged-c33f8157-4662-40a2-867e-4dac8467a80b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:58:51 np0005604943 nova_compute[238883]: 2026-02-02 11:58:51.204 238887 DEBUG oslo_concurrency.lockutils [req-a6366a79-ddd0-4b94-a35a-444f683cb3cf req-3d235406-2034-4ffc-a979-b18059874dfb 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "177de248-c6fd-437b-9326-31ed9842fe34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:58:51 np0005604943 nova_compute[238883]: 2026-02-02 11:58:51.204 238887 DEBUG oslo_concurrency.lockutils [req-a6366a79-ddd0-4b94-a35a-444f683cb3cf req-3d235406-2034-4ffc-a979-b18059874dfb 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:58:51 np0005604943 nova_compute[238883]: 2026-02-02 11:58:51.204 238887 DEBUG oslo_concurrency.lockutils [req-a6366a79-ddd0-4b94-a35a-444f683cb3cf req-3d235406-2034-4ffc-a979-b18059874dfb 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:58:51 np0005604943 nova_compute[238883]: 2026-02-02 11:58:51.204 238887 DEBUG nova.compute.manager [req-a6366a79-ddd0-4b94-a35a-444f683cb3cf req-3d235406-2034-4ffc-a979-b18059874dfb 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] No waiting events found dispatching network-vif-plugged-c33f8157-4662-40a2-867e-4dac8467a80b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 06:58:51 np0005604943 nova_compute[238883]: 2026-02-02 11:58:51.204 238887 WARNING nova.compute.manager [req-a6366a79-ddd0-4b94-a35a-444f683cb3cf req-3d235406-2034-4ffc-a979-b18059874dfb 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Received unexpected event network-vif-plugged-c33f8157-4662-40a2-867e-4dac8467a80b for instance with vm_state deleted and task_state None.
Feb  2 06:58:51 np0005604943 nova_compute[238883]: 2026-02-02 11:58:51.213 238887 DEBUG nova.scheduler.client.report [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 06:58:51 np0005604943 nova_compute[238883]: 2026-02-02 11:58:51.239 238887 DEBUG oslo_concurrency.lockutils [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:58:51 np0005604943 nova_compute[238883]: 2026-02-02 11:58:51.262 238887 INFO nova.scheduler.client.report [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Deleted allocations for instance 177de248-c6fd-437b-9326-31ed9842fe34
Feb  2 06:58:51 np0005604943 nova_compute[238883]: 2026-02-02 11:58:51.324 238887 DEBUG oslo_concurrency.lockutils [None req-2678b7e0-e5c7-4e54-bb7e-7706bc8ab58a 4846ccd205b54116a828ad91820ef58d c061a009eae241049a1e3a1c35aa2503 - - default default] Lock "177de248-c6fd-437b-9326-31ed9842fe34" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:58:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Feb  2 06:58:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Feb  2 06:58:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Feb  2 06:58:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 9.0 KiB/s wr, 178 op/s
Feb  2 06:58:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:58:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Feb  2 06:58:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Feb  2 06:58:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Feb  2 06:58:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Feb  2 06:58:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Feb  2 06:58:53 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Feb  2 06:58:53 np0005604943 nova_compute[238883]: 2026-02-02 11:58:53.941 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:58:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 13 KiB/s wr, 194 op/s
Feb  2 06:58:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:58:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3411994091' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:58:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Feb  2 06:58:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Feb  2 06:58:55 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Feb  2 06:58:56 np0005604943 nova_compute[238883]: 2026-02-02 11:58:56.098 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:58:56 np0005604943 nova_compute[238883]: 2026-02-02 11:58:56.111 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:58:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 12 KiB/s wr, 174 op/s
Feb  2 06:58:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1200656586' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1200656586' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:57 np0005604943 nova_compute[238883]: 2026-02-02 11:58:57.176 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:58:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:57.175 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 06:58:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:57.177 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb  2 06:58:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:58:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Feb  2 06:58:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Feb  2 06:58:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Feb  2 06:58:58 np0005604943 nova_compute[238883]: 2026-02-02 11:58:58.155 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:58:58 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:58:58.179 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 06:58:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 5.8 KiB/s wr, 130 op/s
Feb  2 06:58:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/411057' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/411057' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Feb  2 06:58:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Feb  2 06:58:58 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Feb  2 06:58:58 np0005604943 nova_compute[238883]: 2026-02-02 11:58:58.943 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/274629556' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/274629556' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Feb  2 06:58:59 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Feb  2 06:58:59 np0005604943 podman[252038]: 2026-02-02 11:58:59.913177635 +0000 UTC m=+0.056590765 container create 4b3b3df776894f70a9380a9320e65b0ace467739477ba9d48b35893837b69abb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mccarthy, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 06:58:59 np0005604943 systemd[1]: Started libpod-conmon-4b3b3df776894f70a9380a9320e65b0ace467739477ba9d48b35893837b69abb.scope.
Feb  2 06:58:59 np0005604943 podman[252038]: 2026-02-02 11:58:59.888443206 +0000 UTC m=+0.031856426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:58:59 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:59:00 np0005604943 podman[252038]: 2026-02-02 11:59:00.00029238 +0000 UTC m=+0.143705530 container init 4b3b3df776894f70a9380a9320e65b0ace467739477ba9d48b35893837b69abb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mccarthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:59:00 np0005604943 podman[252038]: 2026-02-02 11:59:00.005942948 +0000 UTC m=+0.149356088 container start 4b3b3df776894f70a9380a9320e65b0ace467739477ba9d48b35893837b69abb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 06:59:00 np0005604943 podman[252038]: 2026-02-02 11:59:00.009921742 +0000 UTC m=+0.153334912 container attach 4b3b3df776894f70a9380a9320e65b0ace467739477ba9d48b35893837b69abb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mccarthy, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:59:00 np0005604943 adoring_mccarthy[252054]: 167 167
Feb  2 06:59:00 np0005604943 systemd[1]: libpod-4b3b3df776894f70a9380a9320e65b0ace467739477ba9d48b35893837b69abb.scope: Deactivated successfully.
Feb  2 06:59:00 np0005604943 podman[252038]: 2026-02-02 11:59:00.013937608 +0000 UTC m=+0.157350748 container died 4b3b3df776894f70a9380a9320e65b0ace467739477ba9d48b35893837b69abb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 06:59:00 np0005604943 systemd[1]: var-lib-containers-storage-overlay-047a50e8af78d3e52449bbb91cc74a5a12932b60d7ec862607a738d8c4b49ef3-merged.mount: Deactivated successfully.
Feb  2 06:59:00 np0005604943 podman[252038]: 2026-02-02 11:59:00.050582309 +0000 UTC m=+0.193995439 container remove 4b3b3df776894f70a9380a9320e65b0ace467739477ba9d48b35893837b69abb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb  2 06:59:00 np0005604943 systemd[1]: libpod-conmon-4b3b3df776894f70a9380a9320e65b0ace467739477ba9d48b35893837b69abb.scope: Deactivated successfully.
Feb  2 06:59:00 np0005604943 podman[252077]: 2026-02-02 11:59:00.188394654 +0000 UTC m=+0.048718919 container create 4a8e978ee784ccdb771e4e285325487fc678eb0211be62b572116b2a31dcbbe9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:59:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 8.1 KiB/s wr, 225 op/s
Feb  2 06:59:00 np0005604943 systemd[1]: Started libpod-conmon-4a8e978ee784ccdb771e4e285325487fc678eb0211be62b572116b2a31dcbbe9.scope.
Feb  2 06:59:00 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:59:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21dea7d0fe7155c75525e95e60abdb555e320d423d85acde5a933f0736309947/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21dea7d0fe7155c75525e95e60abdb555e320d423d85acde5a933f0736309947/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21dea7d0fe7155c75525e95e60abdb555e320d423d85acde5a933f0736309947/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21dea7d0fe7155c75525e95e60abdb555e320d423d85acde5a933f0736309947/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21dea7d0fe7155c75525e95e60abdb555e320d423d85acde5a933f0736309947/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:00 np0005604943 podman[252077]: 2026-02-02 11:59:00.166837468 +0000 UTC m=+0.027161743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:59:00 np0005604943 podman[252077]: 2026-02-02 11:59:00.274441821 +0000 UTC m=+0.134766116 container init 4a8e978ee784ccdb771e4e285325487fc678eb0211be62b572116b2a31dcbbe9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 06:59:00 np0005604943 podman[252077]: 2026-02-02 11:59:00.280582811 +0000 UTC m=+0.140907076 container start 4a8e978ee784ccdb771e4e285325487fc678eb0211be62b572116b2a31dcbbe9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 06:59:00 np0005604943 podman[252077]: 2026-02-02 11:59:00.287916525 +0000 UTC m=+0.148240790 container attach 4a8e978ee784ccdb771e4e285325487fc678eb0211be62b572116b2a31dcbbe9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 06:59:00 np0005604943 wonderful_kapitsa[252093]: --> passed data devices: 0 physical, 3 LVM
Feb  2 06:59:00 np0005604943 wonderful_kapitsa[252093]: --> All data devices are unavailable
Feb  2 06:59:00 np0005604943 systemd[1]: libpod-4a8e978ee784ccdb771e4e285325487fc678eb0211be62b572116b2a31dcbbe9.scope: Deactivated successfully.
Feb  2 06:59:00 np0005604943 podman[252113]: 2026-02-02 11:59:00.800188761 +0000 UTC m=+0.022729808 container died 4a8e978ee784ccdb771e4e285325487fc678eb0211be62b572116b2a31dcbbe9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:59:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Feb  2 06:59:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Feb  2 06:59:00 np0005604943 systemd[1]: var-lib-containers-storage-overlay-21dea7d0fe7155c75525e95e60abdb555e320d423d85acde5a933f0736309947-merged.mount: Deactivated successfully.
Feb  2 06:59:01 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Feb  2 06:59:01 np0005604943 podman[252113]: 2026-02-02 11:59:01.021516006 +0000 UTC m=+0.244057043 container remove 4a8e978ee784ccdb771e4e285325487fc678eb0211be62b572116b2a31dcbbe9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_kapitsa, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:59:01 np0005604943 systemd[1]: libpod-conmon-4a8e978ee784ccdb771e4e285325487fc678eb0211be62b572116b2a31dcbbe9.scope: Deactivated successfully.
Feb  2 06:59:01 np0005604943 nova_compute[238883]: 2026-02-02 11:59:01.113 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:59:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:59:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/224016020' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:59:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:59:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/224016020' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:59:01 np0005604943 podman[252191]: 2026-02-02 11:59:01.478696297 +0000 UTC m=+0.042631708 container create 66b52f13a0436dc997990b2e925a1b78204e57514d52169070b6cc6017941f3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_feistel, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:59:01 np0005604943 systemd[1]: Started libpod-conmon-66b52f13a0436dc997990b2e925a1b78204e57514d52169070b6cc6017941f3e.scope.
Feb  2 06:59:01 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:59:01 np0005604943 podman[252191]: 2026-02-02 11:59:01.461858056 +0000 UTC m=+0.025793277 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:59:01 np0005604943 podman[252191]: 2026-02-02 11:59:01.558795898 +0000 UTC m=+0.122731119 container init 66b52f13a0436dc997990b2e925a1b78204e57514d52169070b6cc6017941f3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_feistel, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:59:01 np0005604943 podman[252191]: 2026-02-02 11:59:01.566686035 +0000 UTC m=+0.130621236 container start 66b52f13a0436dc997990b2e925a1b78204e57514d52169070b6cc6017941f3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_feistel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 06:59:01 np0005604943 funny_feistel[252208]: 167 167
Feb  2 06:59:01 np0005604943 podman[252191]: 2026-02-02 11:59:01.571628705 +0000 UTC m=+0.135564166 container attach 66b52f13a0436dc997990b2e925a1b78204e57514d52169070b6cc6017941f3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_feistel, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 06:59:01 np0005604943 systemd[1]: libpod-66b52f13a0436dc997990b2e925a1b78204e57514d52169070b6cc6017941f3e.scope: Deactivated successfully.
Feb  2 06:59:01 np0005604943 podman[252191]: 2026-02-02 11:59:01.573304019 +0000 UTC m=+0.137239220 container died 66b52f13a0436dc997990b2e925a1b78204e57514d52169070b6cc6017941f3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_feistel, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:59:01 np0005604943 systemd[1]: var-lib-containers-storage-overlay-c0dd31643b0884577df48f9e41a306f13ad067dbad98dc30c405a67c5bfa62e0-merged.mount: Deactivated successfully.
Feb  2 06:59:01 np0005604943 podman[252191]: 2026-02-02 11:59:01.613060992 +0000 UTC m=+0.176996193 container remove 66b52f13a0436dc997990b2e925a1b78204e57514d52169070b6cc6017941f3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:59:01 np0005604943 systemd[1]: libpod-conmon-66b52f13a0436dc997990b2e925a1b78204e57514d52169070b6cc6017941f3e.scope: Deactivated successfully.
Feb  2 06:59:01 np0005604943 podman[252230]: 2026-02-02 11:59:01.758024274 +0000 UTC m=+0.047148978 container create 36b375cb55860f958ff4e490b1ce8a7de9b78146024aa134fa3706487a7d7136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 06:59:01 np0005604943 systemd[1]: Started libpod-conmon-36b375cb55860f958ff4e490b1ce8a7de9b78146024aa134fa3706487a7d7136.scope.
Feb  2 06:59:01 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:59:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf0ee1b6e041f118128e27520dfd6f500b39d1c43e916fb82ebaa3289d609b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf0ee1b6e041f118128e27520dfd6f500b39d1c43e916fb82ebaa3289d609b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf0ee1b6e041f118128e27520dfd6f500b39d1c43e916fb82ebaa3289d609b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf0ee1b6e041f118128e27520dfd6f500b39d1c43e916fb82ebaa3289d609b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:01 np0005604943 podman[252230]: 2026-02-02 11:59:01.737528947 +0000 UTC m=+0.026653671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:59:01 np0005604943 podman[252230]: 2026-02-02 11:59:01.845786256 +0000 UTC m=+0.134910990 container init 36b375cb55860f958ff4e490b1ce8a7de9b78146024aa134fa3706487a7d7136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_heyrovsky, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:59:01 np0005604943 podman[252230]: 2026-02-02 11:59:01.851853495 +0000 UTC m=+0.140978199 container start 36b375cb55860f958ff4e490b1ce8a7de9b78146024aa134fa3706487a7d7136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 06:59:01 np0005604943 podman[252230]: 2026-02-02 11:59:01.85586035 +0000 UTC m=+0.144985084 container attach 36b375cb55860f958ff4e490b1ce8a7de9b78146024aa134fa3706487a7d7136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_heyrovsky, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]: {
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:    "0": [
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:        {
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "devices": [
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "/dev/loop3"
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            ],
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_name": "ceph_lv0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_size": "21470642176",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "name": "ceph_lv0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "tags": {
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.cluster_name": "ceph",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.crush_device_class": "",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.encrypted": "0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.objectstore": "bluestore",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.osd_id": "0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.type": "block",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.vdo": "0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.with_tpm": "0"
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            },
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "type": "block",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "vg_name": "ceph_vg0"
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:        }
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:    ],
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:    "1": [
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:        {
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "devices": [
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "/dev/loop4"
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            ],
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_name": "ceph_lv1",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_size": "21470642176",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "name": "ceph_lv1",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "tags": {
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.cluster_name": "ceph",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.crush_device_class": "",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.encrypted": "0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.objectstore": "bluestore",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.osd_id": "1",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.type": "block",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.vdo": "0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.with_tpm": "0"
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            },
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "type": "block",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "vg_name": "ceph_vg1"
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:        }
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:    ],
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:    "2": [
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:        {
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "devices": [
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "/dev/loop5"
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            ],
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_name": "ceph_lv2",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_size": "21470642176",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "name": "ceph_lv2",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "tags": {
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.cephx_lockbox_secret": "",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.cluster_name": "ceph",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.crush_device_class": "",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.encrypted": "0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.objectstore": "bluestore",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.osd_id": "2",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.type": "block",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.vdo": "0",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:                "ceph.with_tpm": "0"
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            },
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "type": "block",
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:            "vg_name": "ceph_vg2"
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:        }
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]:    ]
Feb  2 06:59:02 np0005604943 vigorous_heyrovsky[252247]: }
Feb  2 06:59:02 np0005604943 systemd[1]: libpod-36b375cb55860f958ff4e490b1ce8a7de9b78146024aa134fa3706487a7d7136.scope: Deactivated successfully.
Feb  2 06:59:02 np0005604943 podman[252230]: 2026-02-02 11:59:02.130690449 +0000 UTC m=+0.419815163 container died 36b375cb55860f958ff4e490b1ce8a7de9b78146024aa134fa3706487a7d7136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 06:59:02 np0005604943 systemd[1]: var-lib-containers-storage-overlay-caf0ee1b6e041f118128e27520dfd6f500b39d1c43e916fb82ebaa3289d609b9-merged.mount: Deactivated successfully.
Feb  2 06:59:02 np0005604943 podman[252230]: 2026-02-02 11:59:02.178657297 +0000 UTC m=+0.467782001 container remove 36b375cb55860f958ff4e490b1ce8a7de9b78146024aa134fa3706487a7d7136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Feb  2 06:59:02 np0005604943 systemd[1]: libpod-conmon-36b375cb55860f958ff4e490b1ce8a7de9b78146024aa134fa3706487a7d7136.scope: Deactivated successfully.
Feb  2 06:59:02 np0005604943 podman[252264]: 2026-02-02 11:59:02.229257194 +0000 UTC m=+0.071350353 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:59:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 7.4 KiB/s wr, 187 op/s
Feb  2 06:59:02 np0005604943 podman[252257]: 2026-02-02 11:59:02.312854897 +0000 UTC m=+0.154396140 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb  2 06:59:02 np0005604943 nova_compute[238883]: 2026-02-02 11:59:02.433 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:59:02 np0005604943 podman[252372]: 2026-02-02 11:59:02.704561011 +0000 UTC m=+0.045182237 container create 3a994dd1a7fbf2f248264353da2feae2e76d29188c4f20ae6406259b756faf71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_feynman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 06:59:02 np0005604943 systemd[1]: Started libpod-conmon-3a994dd1a7fbf2f248264353da2feae2e76d29188c4f20ae6406259b756faf71.scope.
Feb  2 06:59:02 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:59:02 np0005604943 podman[252372]: 2026-02-02 11:59:02.68431145 +0000 UTC m=+0.024932696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:59:02 np0005604943 podman[252372]: 2026-02-02 11:59:02.805678453 +0000 UTC m=+0.146299709 container init 3a994dd1a7fbf2f248264353da2feae2e76d29188c4f20ae6406259b756faf71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:59:02 np0005604943 podman[252372]: 2026-02-02 11:59:02.813762236 +0000 UTC m=+0.154383472 container start 3a994dd1a7fbf2f248264353da2feae2e76d29188c4f20ae6406259b756faf71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_feynman, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 06:59:02 np0005604943 cool_feynman[252388]: 167 167
Feb  2 06:59:02 np0005604943 systemd[1]: libpod-3a994dd1a7fbf2f248264353da2feae2e76d29188c4f20ae6406259b756faf71.scope: Deactivated successfully.
Feb  2 06:59:02 np0005604943 podman[252372]: 2026-02-02 11:59:02.823163411 +0000 UTC m=+0.163784657 container attach 3a994dd1a7fbf2f248264353da2feae2e76d29188c4f20ae6406259b756faf71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 06:59:02 np0005604943 podman[252372]: 2026-02-02 11:59:02.824304382 +0000 UTC m=+0.164925618 container died 3a994dd1a7fbf2f248264353da2feae2e76d29188c4f20ae6406259b756faf71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_feynman, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 06:59:02 np0005604943 systemd[1]: var-lib-containers-storage-overlay-51aa24c7d606e55f4be73b95bfc3d7b36d31ee1300a385c1edd269531ccf04ed-merged.mount: Deactivated successfully.
Feb  2 06:59:02 np0005604943 podman[252372]: 2026-02-02 11:59:02.883314649 +0000 UTC m=+0.223935885 container remove 3a994dd1a7fbf2f248264353da2feae2e76d29188c4f20ae6406259b756faf71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 06:59:02 np0005604943 systemd[1]: libpod-conmon-3a994dd1a7fbf2f248264353da2feae2e76d29188c4f20ae6406259b756faf71.scope: Deactivated successfully.
Feb  2 06:59:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Feb  2 06:59:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Feb  2 06:59:03 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Feb  2 06:59:03 np0005604943 podman[252413]: 2026-02-02 11:59:03.054859989 +0000 UTC m=+0.074281789 container create db9f615cd23bc029aa4ba31716c28df301ccb31e907e788f49bf551ebc051946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_blackwell, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 06:59:03 np0005604943 podman[252413]: 2026-02-02 11:59:03.014058989 +0000 UTC m=+0.033480789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 06:59:03 np0005604943 systemd[1]: Started libpod-conmon-db9f615cd23bc029aa4ba31716c28df301ccb31e907e788f49bf551ebc051946.scope.
Feb  2 06:59:03 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:59:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c6b1db9e287b01047c9b4401b4287b4a3a4e8eeaa89a22067f2749cedc89e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c6b1db9e287b01047c9b4401b4287b4a3a4e8eeaa89a22067f2749cedc89e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c6b1db9e287b01047c9b4401b4287b4a3a4e8eeaa89a22067f2749cedc89e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c6b1db9e287b01047c9b4401b4287b4a3a4e8eeaa89a22067f2749cedc89e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:03 np0005604943 podman[252413]: 2026-02-02 11:59:03.152418339 +0000 UTC m=+0.171840119 container init db9f615cd23bc029aa4ba31716c28df301ccb31e907e788f49bf551ebc051946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_blackwell, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 06:59:03 np0005604943 podman[252413]: 2026-02-02 11:59:03.162430731 +0000 UTC m=+0.181852491 container start db9f615cd23bc029aa4ba31716c28df301ccb31e907e788f49bf551ebc051946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_blackwell, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 06:59:03 np0005604943 podman[252413]: 2026-02-02 11:59:03.17076852 +0000 UTC m=+0.190190280 container attach db9f615cd23bc029aa4ba31716c28df301ccb31e907e788f49bf551ebc051946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_blackwell, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 06:59:03 np0005604943 lvm[252506]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 06:59:03 np0005604943 lvm[252506]: VG ceph_vg0 finished
Feb  2 06:59:03 np0005604943 lvm[252509]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 06:59:03 np0005604943 lvm[252509]: VG ceph_vg1 finished
Feb  2 06:59:03 np0005604943 nova_compute[238883]: 2026-02-02 11:59:03.907 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033528.905668, 177de248-c6fd-437b-9326-31ed9842fe34 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:59:03 np0005604943 nova_compute[238883]: 2026-02-02 11:59:03.910 238887 INFO nova.compute.manager [-] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] VM Stopped (Lifecycle Event)#033[00m
Feb  2 06:59:03 np0005604943 lvm[252511]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 06:59:03 np0005604943 lvm[252511]: VG ceph_vg2 finished
Feb  2 06:59:03 np0005604943 nova_compute[238883]: 2026-02-02 11:59:03.935 238887 DEBUG nova.compute.manager [None req-a828258e-3c99-4dfe-8566-1a3b1428c9bf - - - - - -] [instance: 177de248-c6fd-437b-9326-31ed9842fe34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:59:03 np0005604943 nova_compute[238883]: 2026-02-02 11:59:03.946 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:04 np0005604943 determined_blackwell[252430]: {}
Feb  2 06:59:04 np0005604943 systemd[1]: libpod-db9f615cd23bc029aa4ba31716c28df301ccb31e907e788f49bf551ebc051946.scope: Deactivated successfully.
Feb  2 06:59:04 np0005604943 systemd[1]: libpod-db9f615cd23bc029aa4ba31716c28df301ccb31e907e788f49bf551ebc051946.scope: Consumed 1.281s CPU time.
Feb  2 06:59:04 np0005604943 podman[252413]: 2026-02-02 11:59:04.033669703 +0000 UTC m=+1.053091493 container died db9f615cd23bc029aa4ba31716c28df301ccb31e907e788f49bf551ebc051946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Feb  2 06:59:04 np0005604943 systemd[1]: var-lib-containers-storage-overlay-71c6b1db9e287b01047c9b4401b4287b4a3a4e8eeaa89a22067f2749cedc89e0-merged.mount: Deactivated successfully.
Feb  2 06:59:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 208 KiB/s rd, 11 KiB/s wr, 278 op/s
Feb  2 06:59:04 np0005604943 podman[252413]: 2026-02-02 11:59:04.239426319 +0000 UTC m=+1.258848079 container remove db9f615cd23bc029aa4ba31716c28df301ccb31e907e788f49bf551ebc051946 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 06:59:04 np0005604943 systemd[1]: libpod-conmon-db9f615cd23bc029aa4ba31716c28df301ccb31e907e788f49bf551ebc051946.scope: Deactivated successfully.
Feb  2 06:59:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 06:59:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:59:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 06:59:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:59:05 np0005604943 nova_compute[238883]: 2026-02-02 11:59:05.203 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:05 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:59:05 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 06:59:06 np0005604943 nova_compute[238883]: 2026-02-02 11:59:06.116 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 3.9 KiB/s wr, 107 op/s
Feb  2 06:59:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:59:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2978234117' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:59:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:59:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2978234117' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:59:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:59:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Feb  2 06:59:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Feb  2 06:59:07 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Feb  2 06:59:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 6.1 KiB/s wr, 139 op/s
Feb  2 06:59:08 np0005604943 nova_compute[238883]: 2026-02-02 11:59:08.948 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:09 np0005604943 nova_compute[238883]: 2026-02-02 11:59:09.109 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "484e5b46-6672-4796-8f30-6d3e862428d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:09 np0005604943 nova_compute[238883]: 2026-02-02 11:59:09.110 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:59:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/741504291' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:59:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:59:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/741504291' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:59:09 np0005604943 nova_compute[238883]: 2026-02-02 11:59:09.177 238887 DEBUG nova.compute.manager [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 06:59:09 np0005604943 nova_compute[238883]: 2026-02-02 11:59:09.263 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:09 np0005604943 nova_compute[238883]: 2026-02-02 11:59:09.264 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:09 np0005604943 nova_compute[238883]: 2026-02-02 11:59:09.272 238887 DEBUG nova.virt.hardware [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 06:59:09 np0005604943 nova_compute[238883]: 2026-02-02 11:59:09.272 238887 INFO nova.compute.claims [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 06:59:09 np0005604943 nova_compute[238883]: 2026-02-02 11:59:09.455 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_11:59:09
Feb  2 06:59:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 06:59:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 06:59:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'backups', '.rgw.root']
Feb  2 06:59:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 06:59:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:59:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/271232397' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:59:09 np0005604943 nova_compute[238883]: 2026-02-02 11:59:09.993 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:09 np0005604943 nova_compute[238883]: 2026-02-02 11:59:09.999 238887 DEBUG nova.compute.provider_tree [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.015 238887 DEBUG nova.scheduler.client.report [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:59:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:10.023 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:10.023 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:10.023 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.146 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.147 238887 DEBUG nova.compute.manager [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.204 238887 DEBUG nova.compute.manager [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.204 238887 DEBUG nova.network.neutron [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.234 238887 INFO nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 6.5 KiB/s wr, 146 op/s
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.266 238887 DEBUG nova.compute.manager [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.401 238887 DEBUG nova.compute.manager [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.403 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.403 238887 INFO nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Creating image(s)#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.424 238887 DEBUG nova.storage.rbd_utils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image 484e5b46-6672-4796-8f30-6d3e862428d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.450 238887 DEBUG nova.storage.rbd_utils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image 484e5b46-6672-4796-8f30-6d3e862428d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.473 238887 DEBUG nova.storage.rbd_utils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image 484e5b46-6672-4796-8f30-6d3e862428d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.475 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.524 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.525 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.525 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.526 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.545 238887 DEBUG nova.storage.rbd_utils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image 484e5b46-6672-4796-8f30-6d3e862428d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.549 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 484e5b46-6672-4796-8f30-6d3e862428d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.566 238887 DEBUG nova.policy [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '619ce2f20dd849f6a462d2162bcccc7a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '61afd70cadc143c2a9c65f6cec8dc9e8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 06:59:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:59:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2082523617' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:59:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:59:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2082523617' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.748 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 484e5b46-6672-4796-8f30-6d3e862428d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.799 238887 DEBUG nova.storage.rbd_utils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] resizing rbd image 484e5b46-6672-4796-8f30-6d3e862428d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.862 238887 DEBUG nova.objects.instance [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lazy-loading 'migration_context' on Instance uuid 484e5b46-6672-4796-8f30-6d3e862428d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.875 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.876 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Ensure instance console log exists: /var/lib/nova/instances/484e5b46-6672-4796-8f30-6d3e862428d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.876 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.877 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:10 np0005604943 nova_compute[238883]: 2026-02-02 11:59:10.877 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:59:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 06:59:11 np0005604943 nova_compute[238883]: 2026-02-02 11:59:11.118 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:11 np0005604943 nova_compute[238883]: 2026-02-02 11:59:11.432 238887 DEBUG nova.network.neutron [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Successfully created port: 5ea4af23-2f74-4e93-8aa4-42a49865dbf4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.067 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Acquiring lock "b6e0af38-f069-4516-848d-2b7093956fa0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.068 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.114 238887 DEBUG nova.compute.manager [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.224 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.224 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.230 238887 DEBUG nova.virt.hardware [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.231 238887 INFO nova.compute.claims [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 06:59:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.6 KiB/s wr, 88 op/s
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.365 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.773 238887 DEBUG nova.network.neutron [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Successfully updated port: 5ea4af23-2f74-4e93-8aa4-42a49865dbf4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.790 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "refresh_cache-484e5b46-6672-4796-8f30-6d3e862428d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.790 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquired lock "refresh_cache-484e5b46-6672-4796-8f30-6d3e862428d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.791 238887 DEBUG nova.network.neutron [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.881 238887 DEBUG nova.compute.manager [req-ece2160d-3594-4876-97a7-387d27647409 req-aaa45c6a-cf27-490e-b965-ed261ca056cd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Received event network-changed-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.882 238887 DEBUG nova.compute.manager [req-ece2160d-3594-4876-97a7-387d27647409 req-aaa45c6a-cf27-490e-b965-ed261ca056cd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Refreshing instance network info cache due to event network-changed-5ea4af23-2f74-4e93-8aa4-42a49865dbf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.882 238887 DEBUG oslo_concurrency.lockutils [req-ece2160d-3594-4876-97a7-387d27647409 req-aaa45c6a-cf27-490e-b965-ed261ca056cd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-484e5b46-6672-4796-8f30-6d3e862428d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:59:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:59:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2064342486' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.957 238887 DEBUG nova.network.neutron [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.970 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.975 238887 DEBUG nova.compute.provider_tree [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:59:12 np0005604943 nova_compute[238883]: 2026-02-02 11:59:12.991 238887 DEBUG nova.scheduler.client.report [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.022 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.023 238887 DEBUG nova.compute.manager [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.064 238887 DEBUG nova.compute.manager [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.065 238887 DEBUG nova.network.neutron [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.080 238887 INFO nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.094 238887 DEBUG nova.compute.manager [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.184 238887 DEBUG nova.compute.manager [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.186 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.186 238887 INFO nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Creating image(s)#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.206 238887 DEBUG nova.storage.rbd_utils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] rbd image b6e0af38-f069-4516-848d-2b7093956fa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.227 238887 DEBUG nova.storage.rbd_utils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] rbd image b6e0af38-f069-4516-848d-2b7093956fa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.248 238887 DEBUG nova.storage.rbd_utils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] rbd image b6e0af38-f069-4516-848d-2b7093956fa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.251 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.298 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.299 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.300 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.300 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.316 238887 DEBUG nova.storage.rbd_utils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] rbd image b6e0af38-f069-4516-848d-2b7093956fa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.320 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 b6e0af38-f069-4516-848d-2b7093956fa0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.551 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 b6e0af38-f069-4516-848d-2b7093956fa0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.619 238887 DEBUG nova.storage.rbd_utils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] resizing rbd image b6e0af38-f069-4516-848d-2b7093956fa0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.676 238887 DEBUG nova.objects.instance [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lazy-loading 'migration_context' on Instance uuid b6e0af38-f069-4516-848d-2b7093956fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.700 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.700 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Ensure instance console log exists: /var/lib/nova/instances/b6e0af38-f069-4516-848d-2b7093956fa0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.701 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.701 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.701 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.804 238887 DEBUG nova.policy [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d0b5cfd8d84432894bd264065bcb0ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cdcfa3aaa83541878311def7781b5b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 06:59:13 np0005604943 nova_compute[238883]: 2026-02-02 11:59:13.950 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.212 238887 DEBUG nova.network.neutron [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Updating instance_info_cache with network_info: [{"id": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "address": "fa:16:3e:92:d4:07", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea4af23-2f", "ovs_interfaceid": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.240 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Releasing lock "refresh_cache-484e5b46-6672-4796-8f30-6d3e862428d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.241 238887 DEBUG nova.compute.manager [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Instance network_info: |[{"id": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "address": "fa:16:3e:92:d4:07", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea4af23-2f", "ovs_interfaceid": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.241 238887 DEBUG oslo_concurrency.lockutils [req-ece2160d-3594-4876-97a7-387d27647409 req-aaa45c6a-cf27-490e-b965-ed261ca056cd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-484e5b46-6672-4796-8f30-6d3e862428d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.241 238887 DEBUG nova.network.neutron [req-ece2160d-3594-4876-97a7-387d27647409 req-aaa45c6a-cf27-490e-b965-ed261ca056cd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Refreshing network info cache for port 5ea4af23-2f74-4e93-8aa4-42a49865dbf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:59:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 105 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 2.8 MiB/s wr, 117 op/s
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.245 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Start _get_guest_xml network_info=[{"id": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "address": "fa:16:3e:92:d4:07", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea4af23-2f", "ovs_interfaceid": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.249 238887 WARNING nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.254 238887 DEBUG nova.virt.libvirt.host [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.255 238887 DEBUG nova.virt.libvirt.host [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.264 238887 DEBUG nova.virt.libvirt.host [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.264 238887 DEBUG nova.virt.libvirt.host [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.265 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.265 238887 DEBUG nova.virt.hardware [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.266 238887 DEBUG nova.virt.hardware [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.266 238887 DEBUG nova.virt.hardware [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.266 238887 DEBUG nova.virt.hardware [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.267 238887 DEBUG nova.virt.hardware [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.267 238887 DEBUG nova.virt.hardware [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.267 238887 DEBUG nova.virt.hardware [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.267 238887 DEBUG nova.virt.hardware [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.268 238887 DEBUG nova.virt.hardware [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.268 238887 DEBUG nova.virt.hardware [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.268 238887 DEBUG nova.virt.hardware [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.271 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.449 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.538 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.658 238887 DEBUG nova.network.neutron [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Successfully created port: 86db1a97-63b9-4069-a69f-bc0ef1f8342f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 06:59:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:59:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3894967870' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.855 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.876 238887 DEBUG nova.storage.rbd_utils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image 484e5b46-6672-4796-8f30-6d3e862428d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:14 np0005604943 nova_compute[238883]: 2026-02-02 11:59:14.879 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.364 238887 DEBUG nova.network.neutron [req-ece2160d-3594-4876-97a7-387d27647409 req-aaa45c6a-cf27-490e-b965-ed261ca056cd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Updated VIF entry in instance network info cache for port 5ea4af23-2f74-4e93-8aa4-42a49865dbf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.365 238887 DEBUG nova.network.neutron [req-ece2160d-3594-4876-97a7-387d27647409 req-aaa45c6a-cf27-490e-b965-ed261ca056cd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Updating instance_info_cache with network_info: [{"id": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "address": "fa:16:3e:92:d4:07", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea4af23-2f", "ovs_interfaceid": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.380 238887 DEBUG oslo_concurrency.lockutils [req-ece2160d-3594-4876-97a7-387d27647409 req-aaa45c6a-cf27-490e-b965-ed261ca056cd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-484e5b46-6672-4796-8f30-6d3e862428d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:59:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:59:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1391305135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.417 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.418 238887 DEBUG nova.virt.libvirt.vif [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-949220200',display_name='tempest-VolumesBackupsTest-instance-949220200',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-949220200',id=9,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNxTZ14bn8vPrO9oxOInbihHMGj10qQvaC4jOXS0xXmtEhvsgYoxZPSH8wDBRxFEFRVuxr8jHsawf9NRli3KMqWhhXStjp7DSe1XQieULnJNXv/iowk8UImq0y2/s8Et5g==',key_name='tempest-keypair-225548884',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61afd70cadc143c2a9c65f6cec8dc9e8',ramdisk_id='',reservation_id='r-r0o2x8zy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1949354358',owner_user_name='tempest-VolumesBackupsTest-1949354358-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:59:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='619ce2f20dd849f6a462d2162bcccc7a',uuid=484e5b46-6672-4796-8f30-6d3e862428d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "address": "fa:16:3e:92:d4:07", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea4af23-2f", "ovs_interfaceid": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.419 238887 DEBUG nova.network.os_vif_util [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Converting VIF {"id": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "address": "fa:16:3e:92:d4:07", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea4af23-2f", "ovs_interfaceid": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.419 238887 DEBUG nova.network.os_vif_util [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:92:d4:07,bridge_name='br-int',has_traffic_filtering=True,id=5ea4af23-2f74-4e93-8aa4-42a49865dbf4,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea4af23-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.420 238887 DEBUG nova.objects.instance [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 484e5b46-6672-4796-8f30-6d3e862428d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.439 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] End _get_guest_xml xml=<domain type="kvm">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  <uuid>484e5b46-6672-4796-8f30-6d3e862428d3</uuid>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  <name>instance-00000009</name>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <nova:name>tempest-VolumesBackupsTest-instance-949220200</nova:name>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 11:59:14</nova:creationTime>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:        <nova:user uuid="619ce2f20dd849f6a462d2162bcccc7a">tempest-VolumesBackupsTest-1949354358-project-member</nova:user>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:        <nova:project uuid="61afd70cadc143c2a9c65f6cec8dc9e8">tempest-VolumesBackupsTest-1949354358</nova:project>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:        <nova:port uuid="5ea4af23-2f74-4e93-8aa4-42a49865dbf4">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <system>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <entry name="serial">484e5b46-6672-4796-8f30-6d3e862428d3</entry>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <entry name="uuid">484e5b46-6672-4796-8f30-6d3e862428d3</entry>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    </system>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  <os>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  </clock>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/484e5b46-6672-4796-8f30-6d3e862428d3_disk">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/484e5b46-6672-4796-8f30-6d3e862428d3_disk.config">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:92:d4:07"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <target dev="tap5ea4af23-2f"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/484e5b46-6672-4796-8f30-6d3e862428d3/console.log" append="off"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    </serial>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <video>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 06:59:15 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 06:59:15 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:59:15 np0005604943 nova_compute[238883]: </domain>
Feb  2 06:59:15 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.440 238887 DEBUG nova.compute.manager [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Preparing to wait for external event network-vif-plugged-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.441 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.441 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.441 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.442 238887 DEBUG nova.virt.libvirt.vif [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-949220200',display_name='tempest-VolumesBackupsTest-instance-949220200',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-949220200',id=9,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNxTZ14bn8vPrO9oxOInbihHMGj10qQvaC4jOXS0xXmtEhvsgYoxZPSH8wDBRxFEFRVuxr8jHsawf9NRli3KMqWhhXStjp7DSe1XQieULnJNXv/iowk8UImq0y2/s8Et5g==',key_name='tempest-keypair-225548884',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61afd70cadc143c2a9c65f6cec8dc9e8',ramdisk_id='',reservation_id='r-r0o2x8zy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1949354358',owner_user_name='tempest-VolumesBackupsTest-1949354358-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:59:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='619ce2f20dd849f6a462d2162bcccc7a',uuid=484e5b46-6672-4796-8f30-6d3e862428d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "address": "fa:16:3e:92:d4:07", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea4af23-2f", "ovs_interfaceid": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.442 238887 DEBUG nova.network.os_vif_util [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Converting VIF {"id": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "address": "fa:16:3e:92:d4:07", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea4af23-2f", "ovs_interfaceid": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.443 238887 DEBUG nova.network.os_vif_util [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:92:d4:07,bridge_name='br-int',has_traffic_filtering=True,id=5ea4af23-2f74-4e93-8aa4-42a49865dbf4,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea4af23-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.443 238887 DEBUG os_vif [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:d4:07,bridge_name='br-int',has_traffic_filtering=True,id=5ea4af23-2f74-4e93-8aa4-42a49865dbf4,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea4af23-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.444 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.444 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.445 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.447 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.447 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ea4af23-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.448 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ea4af23-2f, col_values=(('external_ids', {'iface-id': '5ea4af23-2f74-4e93-8aa4-42a49865dbf4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:92:d4:07', 'vm-uuid': '484e5b46-6672-4796-8f30-6d3e862428d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:15 np0005604943 NetworkManager[49093]: <info>  [1770033555.4896] manager: (tap5ea4af23-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.489 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.491 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.495 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.497 238887 INFO os_vif [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:d4:07,bridge_name='br-int',has_traffic_filtering=True,id=5ea4af23-2f74-4e93-8aa4-42a49865dbf4,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea4af23-2f')#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.542 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.542 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.542 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No VIF found with MAC fa:16:3e:92:d4:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.543 238887 INFO nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Using config drive#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.560 238887 DEBUG nova.storage.rbd_utils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image 484e5b46-6672-4796-8f30-6d3e862428d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.976 238887 INFO nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Creating config drive at /var/lib/nova/instances/484e5b46-6672-4796-8f30-6d3e862428d3/disk.config#033[00m
Feb  2 06:59:15 np0005604943 nova_compute[238883]: 2026-02-02 11:59:15.986 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/484e5b46-6672-4796-8f30-6d3e862428d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp20hvsqe_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.004 238887 DEBUG nova.network.neutron [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Successfully updated port: 86db1a97-63b9-4069-a69f-bc0ef1f8342f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.025 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Acquiring lock "refresh_cache-b6e0af38-f069-4516-848d-2b7093956fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.025 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Acquired lock "refresh_cache-b6e0af38-f069-4516-848d-2b7093956fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.026 238887 DEBUG nova.network.neutron [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.106 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/484e5b46-6672-4796-8f30-6d3e862428d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp20hvsqe_" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.132 238887 DEBUG nova.storage.rbd_utils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] rbd image 484e5b46-6672-4796-8f30-6d3e862428d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.135 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/484e5b46-6672-4796-8f30-6d3e862428d3/disk.config 484e5b46-6672-4796-8f30-6d3e862428d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.147 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.152 238887 DEBUG nova.compute.manager [req-9328b71c-fab8-4e04-bd9d-6f090c42b0fc req-8fce5703-e807-4d3f-a568-ddf734e5fae6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Received event network-changed-86db1a97-63b9-4069-a69f-bc0ef1f8342f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.153 238887 DEBUG nova.compute.manager [req-9328b71c-fab8-4e04-bd9d-6f090c42b0fc req-8fce5703-e807-4d3f-a568-ddf734e5fae6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Refreshing instance network info cache due to event network-changed-86db1a97-63b9-4069-a69f-bc0ef1f8342f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.153 238887 DEBUG oslo_concurrency.lockutils [req-9328b71c-fab8-4e04-bd9d-6f090c42b0fc req-8fce5703-e807-4d3f-a568-ddf734e5fae6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-b6e0af38-f069-4516-848d-2b7093956fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.194 238887 DEBUG nova.network.neutron [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.235 238887 DEBUG oslo_concurrency.processutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/484e5b46-6672-4796-8f30-6d3e862428d3/disk.config 484e5b46-6672-4796-8f30-6d3e862428d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.235 238887 INFO nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Deleting local config drive /var/lib/nova/instances/484e5b46-6672-4796-8f30-6d3e862428d3/disk.config because it was imported into RBD.#033[00m
Feb  2 06:59:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 105 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 2.8 MiB/s wr, 117 op/s
Feb  2 06:59:16 np0005604943 kernel: tap5ea4af23-2f: entered promiscuous mode
Feb  2 06:59:16 np0005604943 NetworkManager[49093]: <info>  [1770033556.2690] manager: (tap5ea4af23-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Feb  2 06:59:16 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:16Z|00089|binding|INFO|Claiming lport 5ea4af23-2f74-4e93-8aa4-42a49865dbf4 for this chassis.
Feb  2 06:59:16 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:16Z|00090|binding|INFO|5ea4af23-2f74-4e93-8aa4-42a49865dbf4: Claiming fa:16:3e:92:d4:07 10.100.0.7
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.269 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.273 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.282 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:d4:07 10.100.0.7'], port_security=['fa:16:3e:92:d4:07 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '484e5b46-6672-4796-8f30-6d3e862428d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-302d1601-7819-4001-9e16-ee97183eb73b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61afd70cadc143c2a9c65f6cec8dc9e8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f4789968-3c4c-4b11-a2c2-fa2dafeb7088', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb72e047-676c-4da5-9d5d-6a9b44c0057a, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=5ea4af23-2f74-4e93-8aa4-42a49865dbf4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.283 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 5ea4af23-2f74-4e93-8aa4-42a49865dbf4 in datapath 302d1601-7819-4001-9e16-ee97183eb73b bound to our chassis#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.285 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 302d1601-7819-4001-9e16-ee97183eb73b#033[00m
Feb  2 06:59:16 np0005604943 systemd-machined[206973]: New machine qemu-9-instance-00000009.
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.293 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ea303a8e-a85d-4750-b758-0af7a9f12f5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.294 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap302d1601-71 in ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.296 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap302d1601-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.296 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd9b894-4bb8-41a3-bba7-bd1567dfaca5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.296 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbd8395-d39d-44c8-a8b8-de3a7a20183f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.301 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:16 np0005604943 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.306 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[90ec0a2e-460e-46bc-a6b0-8b4d0ffce92b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:16Z|00091|binding|INFO|Setting lport 5ea4af23-2f74-4e93-8aa4-42a49865dbf4 ovn-installed in OVS
Feb  2 06:59:16 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:16Z|00092|binding|INFO|Setting lport 5ea4af23-2f74-4e93-8aa4-42a49865dbf4 up in Southbound
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.308 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:16 np0005604943 systemd-udevd[253065]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.317 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6f581c-dc76-4cee-ad29-45926a4f1468]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 NetworkManager[49093]: <info>  [1770033556.3220] device (tap5ea4af23-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 06:59:16 np0005604943 NetworkManager[49093]: <info>  [1770033556.3224] device (tap5ea4af23-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.337 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdd1dc6-3f22-4ca0-94aa-a0937d455a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 NetworkManager[49093]: <info>  [1770033556.3431] manager: (tap302d1601-70): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.342 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3d83ea8b-8857-4100-bfea-7b59d7588d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.363 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[0a75e76d-272d-4866-a074-8a7ac23b90d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.365 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[619e1a96-60bf-4407-b555-70d55b0db48e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 NetworkManager[49093]: <info>  [1770033556.3763] device (tap302d1601-70): carrier: link connected
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.377 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[fe37c786-2dc2-4208-8025-85f6b0192a53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.387 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[5f858f6b-a5f4-4617-a67b-fc778c0b7041]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap302d1601-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:b2:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400172, 'reachable_time': 18752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253096, 'error': None, 'target': 'ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.396 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9521fc-81dd-43b3-810b-0b0e35d476f6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:b2d7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400172, 'tstamp': 400172}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253097, 'error': None, 'target': 'ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.405 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[38eceff3-79ba-4aca-818e-c1c7e46ec371]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap302d1601-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:b2:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400172, 'reachable_time': 18752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253098, 'error': None, 'target': 'ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.422 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[88e32680-581d-40d0-aa55-d81e99b283f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.453 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b69f2c-e94b-45ae-b2f9-f41553793e76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.454 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap302d1601-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.455 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.455 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap302d1601-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:16 np0005604943 NetworkManager[49093]: <info>  [1770033556.4578] manager: (tap302d1601-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Feb  2 06:59:16 np0005604943 kernel: tap302d1601-70: entered promiscuous mode
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.457 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.459 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap302d1601-70, col_values=(('external_ids', {'iface-id': '7f7a24e7-2e36-4c1c-8857-8367e857534f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:16 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:16Z|00093|binding|INFO|Releasing lport 7f7a24e7-2e36-4c1c-8857-8367e857534f from this chassis (sb_readonly=0)
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.460 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.466 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.466 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/302d1601-7819-4001-9e16-ee97183eb73b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/302d1601-7819-4001-9e16-ee97183eb73b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.467 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[0b39fba9-d784-4296-87f4-10abafd87206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.467 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-302d1601-7819-4001-9e16-ee97183eb73b
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/302d1601-7819-4001-9e16-ee97183eb73b.pid.haproxy
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID 302d1601-7819-4001-9e16-ee97183eb73b
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 06:59:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:16.468 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b', 'env', 'PROCESS_TAG=haproxy-302d1601-7819-4001-9e16-ee97183eb73b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/302d1601-7819-4001-9e16-ee97183eb73b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.623 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033556.6230104, 484e5b46-6672-4796-8f30-6d3e862428d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.624 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] VM Started (Lifecycle Event)#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.650 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.654 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033556.6237733, 484e5b46-6672-4796-8f30-6d3e862428d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.655 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] VM Paused (Lifecycle Event)#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.680 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.684 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:59:16 np0005604943 nova_compute[238883]: 2026-02-02 11:59:16.726 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:59:16 np0005604943 podman[253172]: 2026-02-02 11:59:16.773866468 +0000 UTC m=+0.042932157 container create 0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb  2 06:59:16 np0005604943 systemd[1]: Started libpod-conmon-0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41.scope.
Feb  2 06:59:16 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:59:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708cf6674ca98aecb2bd3d3b8a772c550dc0637d72d5f2b63c8c278e063f02ff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:16 np0005604943 podman[253172]: 2026-02-02 11:59:16.837734054 +0000 UTC m=+0.106799723 container init 0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 06:59:16 np0005604943 podman[253172]: 2026-02-02 11:59:16.842646262 +0000 UTC m=+0.111711931 container start 0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb  2 06:59:16 np0005604943 podman[253172]: 2026-02-02 11:59:16.750150776 +0000 UTC m=+0.019216475 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 06:59:16 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[253187]: [NOTICE]   (253191) : New worker (253193) forked
Feb  2 06:59:16 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[253187]: [NOTICE]   (253191) : Loading success.
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.269 238887 DEBUG nova.network.neutron [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Updating instance_info_cache with network_info: [{"id": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "address": "fa:16:3e:ed:66:a5", "network": {"id": "0d5311c0-0d13-45dd-abcb-d46d409b1a1d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1807408621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdcfa3aaa83541878311def7781b5b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86db1a97-63", "ovs_interfaceid": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.333 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Releasing lock "refresh_cache-b6e0af38-f069-4516-848d-2b7093956fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.334 238887 DEBUG nova.compute.manager [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Instance network_info: |[{"id": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "address": "fa:16:3e:ed:66:a5", "network": {"id": "0d5311c0-0d13-45dd-abcb-d46d409b1a1d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1807408621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdcfa3aaa83541878311def7781b5b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86db1a97-63", "ovs_interfaceid": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.335 238887 DEBUG oslo_concurrency.lockutils [req-9328b71c-fab8-4e04-bd9d-6f090c42b0fc req-8fce5703-e807-4d3f-a568-ddf734e5fae6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-b6e0af38-f069-4516-848d-2b7093956fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.335 238887 DEBUG nova.network.neutron [req-9328b71c-fab8-4e04-bd9d-6f090c42b0fc req-8fce5703-e807-4d3f-a568-ddf734e5fae6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Refreshing network info cache for port 86db1a97-63b9-4069-a69f-bc0ef1f8342f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.337 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Start _get_guest_xml network_info=[{"id": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "address": "fa:16:3e:ed:66:a5", "network": {"id": "0d5311c0-0d13-45dd-abcb-d46d409b1a1d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1807408621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdcfa3aaa83541878311def7781b5b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86db1a97-63", "ovs_interfaceid": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.340 238887 WARNING nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.345 238887 DEBUG nova.virt.libvirt.host [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.345 238887 DEBUG nova.virt.libvirt.host [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.347 238887 DEBUG nova.virt.libvirt.host [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.348 238887 DEBUG nova.virt.libvirt.host [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.348 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.348 238887 DEBUG nova.virt.hardware [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.349 238887 DEBUG nova.virt.hardware [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.349 238887 DEBUG nova.virt.hardware [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.349 238887 DEBUG nova.virt.hardware [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.349 238887 DEBUG nova.virt.hardware [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.350 238887 DEBUG nova.virt.hardware [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.350 238887 DEBUG nova.virt.hardware [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.350 238887 DEBUG nova.virt.hardware [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.350 238887 DEBUG nova.virt.hardware [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.350 238887 DEBUG nova.virt.hardware [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.351 238887 DEBUG nova.virt.hardware [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.353 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:59:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Feb  2 06:59:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Feb  2 06:59:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Feb  2 06:59:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:59:17 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3176334505' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.908 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.935 238887 DEBUG nova.storage.rbd_utils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] rbd image b6e0af38-f069-4516-848d-2b7093956fa0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:17 np0005604943 nova_compute[238883]: 2026-02-02 11:59:17.940 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 134 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Feb  2 06:59:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:59:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3753796706' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.468 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.469 238887 DEBUG nova.virt.libvirt.vif [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:59:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1701407797',display_name='tempest-TestEncryptedCinderVolumes-server-1701407797',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1701407797',id=10,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEX84H+sJrhf5p3KTmgGpTv7aL1JeHX+jKomKR+iVFNA5sje0qIr0cbd2tkcYekHu8KBhE73g1auIf5O4mdKv3J1DJFzRrlsaIenzFIP4e0B3zCPKMXcvCNhgAiOs0HKTA==',key_name='tempest-keypair-1106499579',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdcfa3aaa83541878311def7781b5b82',ramdisk_id='',reservation_id='r-vpsnetyn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-600410399',owner_user_name='tempest-TestEncryptedCinderVolumes-600410399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:59:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0d0b5cfd8d84432894bd264065bcb0ba',uuid=b6e0af38-f069-4516-848d-2b7093956fa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "address": "fa:16:3e:ed:66:a5", "network": {"id": "0d5311c0-0d13-45dd-abcb-d46d409b1a1d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1807408621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdcfa3aaa83541878311def7781b5b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86db1a97-63", "ovs_interfaceid": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.469 238887 DEBUG nova.network.os_vif_util [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Converting VIF {"id": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "address": "fa:16:3e:ed:66:a5", "network": {"id": "0d5311c0-0d13-45dd-abcb-d46d409b1a1d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1807408621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdcfa3aaa83541878311def7781b5b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86db1a97-63", "ovs_interfaceid": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.470 238887 DEBUG nova.network.os_vif_util [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:66:a5,bridge_name='br-int',has_traffic_filtering=True,id=86db1a97-63b9-4069-a69f-bc0ef1f8342f,network=Network(0d5311c0-0d13-45dd-abcb-d46d409b1a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86db1a97-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.471 238887 DEBUG nova.objects.instance [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6e0af38-f069-4516-848d-2b7093956fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.558 238887 DEBUG nova.compute.manager [req-9c78afed-f73e-4381-b53a-459def95c3bd req-c268d18d-667b-40c0-8df7-5bdc46009967 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Received event network-vif-plugged-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.558 238887 DEBUG oslo_concurrency.lockutils [req-9c78afed-f73e-4381-b53a-459def95c3bd req-c268d18d-667b-40c0-8df7-5bdc46009967 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.559 238887 DEBUG oslo_concurrency.lockutils [req-9c78afed-f73e-4381-b53a-459def95c3bd req-c268d18d-667b-40c0-8df7-5bdc46009967 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.559 238887 DEBUG oslo_concurrency.lockutils [req-9c78afed-f73e-4381-b53a-459def95c3bd req-c268d18d-667b-40c0-8df7-5bdc46009967 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.559 238887 DEBUG nova.compute.manager [req-9c78afed-f73e-4381-b53a-459def95c3bd req-c268d18d-667b-40c0-8df7-5bdc46009967 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Processing event network-vif-plugged-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.559 238887 DEBUG nova.compute.manager [req-9c78afed-f73e-4381-b53a-459def95c3bd req-c268d18d-667b-40c0-8df7-5bdc46009967 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Received event network-vif-plugged-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.560 238887 DEBUG oslo_concurrency.lockutils [req-9c78afed-f73e-4381-b53a-459def95c3bd req-c268d18d-667b-40c0-8df7-5bdc46009967 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.560 238887 DEBUG oslo_concurrency.lockutils [req-9c78afed-f73e-4381-b53a-459def95c3bd req-c268d18d-667b-40c0-8df7-5bdc46009967 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.560 238887 DEBUG oslo_concurrency.lockutils [req-9c78afed-f73e-4381-b53a-459def95c3bd req-c268d18d-667b-40c0-8df7-5bdc46009967 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.560 238887 DEBUG nova.compute.manager [req-9c78afed-f73e-4381-b53a-459def95c3bd req-c268d18d-667b-40c0-8df7-5bdc46009967 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] No waiting events found dispatching network-vif-plugged-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.561 238887 WARNING nova.compute.manager [req-9c78afed-f73e-4381-b53a-459def95c3bd req-c268d18d-667b-40c0-8df7-5bdc46009967 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Received unexpected event network-vif-plugged-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 for instance with vm_state building and task_state spawning.#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.561 238887 DEBUG nova.compute.manager [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.565 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033558.5653698, 484e5b46-6672-4796-8f30-6d3e862428d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.565 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] VM Resumed (Lifecycle Event)#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.567 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.570 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] End _get_guest_xml xml=<domain type="kvm">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  <uuid>b6e0af38-f069-4516-848d-2b7093956fa0</uuid>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  <name>instance-0000000a</name>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-1701407797</nova:name>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 11:59:17</nova:creationTime>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:        <nova:user uuid="0d0b5cfd8d84432894bd264065bcb0ba">tempest-TestEncryptedCinderVolumes-600410399-project-member</nova:user>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:        <nova:project uuid="cdcfa3aaa83541878311def7781b5b82">tempest-TestEncryptedCinderVolumes-600410399</nova:project>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:        <nova:port uuid="86db1a97-63b9-4069-a69f-bc0ef1f8342f">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <system>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <entry name="serial">b6e0af38-f069-4516-848d-2b7093956fa0</entry>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <entry name="uuid">b6e0af38-f069-4516-848d-2b7093956fa0</entry>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    </system>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  <os>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  </clock>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/b6e0af38-f069-4516-848d-2b7093956fa0_disk">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/b6e0af38-f069-4516-848d-2b7093956fa0_disk.config">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:ed:66:a5"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <target dev="tap86db1a97-63"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/b6e0af38-f069-4516-848d-2b7093956fa0/console.log" append="off"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    </serial>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <video>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 06:59:18 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 06:59:18 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:59:18 np0005604943 nova_compute[238883]: </domain>
Feb  2 06:59:18 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.571 238887 DEBUG nova.compute.manager [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Preparing to wait for external event network-vif-plugged-86db1a97-63b9-4069-a69f-bc0ef1f8342f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.571 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Acquiring lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.571 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.571 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.572 238887 DEBUG nova.virt.libvirt.vif [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:59:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1701407797',display_name='tempest-TestEncryptedCinderVolumes-server-1701407797',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1701407797',id=10,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEX84H+sJrhf5p3KTmgGpTv7aL1JeHX+jKomKR+iVFNA5sje0qIr0cbd2tkcYekHu8KBhE73g1auIf5O4mdKv3J1DJFzRrlsaIenzFIP4e0B3zCPKMXcvCNhgAiOs0HKTA==',key_name='tempest-keypair-1106499579',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cdcfa3aaa83541878311def7781b5b82',ramdisk_id='',reservation_id='r-vpsnetyn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-600410399',owner_user_name='tempest-TestEncryptedCinderVolumes-600410399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:59:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0d0b5cfd8d84432894bd264065bcb0ba',uuid=b6e0af38-f069-4516-848d-2b7093956fa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "address": "fa:16:3e:ed:66:a5", "network": {"id": "0d5311c0-0d13-45dd-abcb-d46d409b1a1d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1807408621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdcfa3aaa83541878311def7781b5b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86db1a97-63", "ovs_interfaceid": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.573 238887 DEBUG nova.network.os_vif_util [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Converting VIF {"id": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "address": "fa:16:3e:ed:66:a5", "network": {"id": "0d5311c0-0d13-45dd-abcb-d46d409b1a1d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1807408621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdcfa3aaa83541878311def7781b5b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86db1a97-63", "ovs_interfaceid": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.573 238887 DEBUG nova.network.os_vif_util [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:66:a5,bridge_name='br-int',has_traffic_filtering=True,id=86db1a97-63b9-4069-a69f-bc0ef1f8342f,network=Network(0d5311c0-0d13-45dd-abcb-d46d409b1a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86db1a97-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.574 238887 DEBUG os_vif [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:66:a5,bridge_name='br-int',has_traffic_filtering=True,id=86db1a97-63b9-4069-a69f-bc0ef1f8342f,network=Network(0d5311c0-0d13-45dd-abcb-d46d409b1a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86db1a97-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.575 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.576 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.576 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.580 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.581 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86db1a97-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.582 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86db1a97-63, col_values=(('external_ids', {'iface-id': '86db1a97-63b9-4069-a69f-bc0ef1f8342f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:66:a5', 'vm-uuid': 'b6e0af38-f069-4516-848d-2b7093956fa0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:18 np0005604943 NetworkManager[49093]: <info>  [1770033558.5842] manager: (tap86db1a97-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.588 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.589 238887 INFO nova.virt.libvirt.driver [-] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Instance spawned successfully.#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.589 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.591 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.592 238887 INFO os_vif [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:66:a5,bridge_name='br-int',has_traffic_filtering=True,id=86db1a97-63b9-4069-a69f-bc0ef1f8342f,network=Network(0d5311c0-0d13-45dd-abcb-d46d409b1a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86db1a97-63')#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.618 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.621 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.627 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.627 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.628 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.628 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.629 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.629 238887 DEBUG nova.virt.libvirt.driver [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.662 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.667 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.667 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.668 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] No VIF found with MAC fa:16:3e:ed:66:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.668 238887 INFO nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Using config drive#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.689 238887 DEBUG nova.storage.rbd_utils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] rbd image b6e0af38-f069-4516-848d-2b7093956fa0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.727 238887 INFO nova.compute.manager [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Took 8.33 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.728 238887 DEBUG nova.compute.manager [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.831 238887 INFO nova.compute.manager [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Took 9.59 seconds to build instance.#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.856 238887 DEBUG oslo_concurrency.lockutils [None req-97010a39-0495-41ab-91da-72359fc3bae7 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.915 238887 DEBUG nova.network.neutron [req-9328b71c-fab8-4e04-bd9d-6f090c42b0fc req-8fce5703-e807-4d3f-a568-ddf734e5fae6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Updated VIF entry in instance network info cache for port 86db1a97-63b9-4069-a69f-bc0ef1f8342f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.916 238887 DEBUG nova.network.neutron [req-9328b71c-fab8-4e04-bd9d-6f090c42b0fc req-8fce5703-e807-4d3f-a568-ddf734e5fae6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Updating instance_info_cache with network_info: [{"id": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "address": "fa:16:3e:ed:66:a5", "network": {"id": "0d5311c0-0d13-45dd-abcb-d46d409b1a1d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1807408621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdcfa3aaa83541878311def7781b5b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86db1a97-63", "ovs_interfaceid": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.939 238887 DEBUG oslo_concurrency.lockutils [req-9328b71c-fab8-4e04-bd9d-6f090c42b0fc req-8fce5703-e807-4d3f-a568-ddf734e5fae6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-b6e0af38-f069-4516-848d-2b7093956fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.987 238887 INFO nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Creating config drive at /var/lib/nova/instances/b6e0af38-f069-4516-848d-2b7093956fa0/disk.config#033[00m
Feb  2 06:59:18 np0005604943 nova_compute[238883]: 2026-02-02 11:59:18.991 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6e0af38-f069-4516-848d-2b7093956fa0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps7tjp4_u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.108 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6e0af38-f069-4516-848d-2b7093956fa0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps7tjp4_u" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.129 238887 DEBUG nova.storage.rbd_utils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] rbd image b6e0af38-f069-4516-848d-2b7093956fa0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.133 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6e0af38-f069-4516-848d-2b7093956fa0/disk.config b6e0af38-f069-4516-848d-2b7093956fa0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.276 238887 DEBUG oslo_concurrency.processutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6e0af38-f069-4516-848d-2b7093956fa0/disk.config b6e0af38-f069-4516-848d-2b7093956fa0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.277 238887 INFO nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Deleting local config drive /var/lib/nova/instances/b6e0af38-f069-4516-848d-2b7093956fa0/disk.config because it was imported into RBD.#033[00m
Feb  2 06:59:19 np0005604943 NetworkManager[49093]: <info>  [1770033559.3035] manager: (tap86db1a97-63): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Feb  2 06:59:19 np0005604943 kernel: tap86db1a97-63: entered promiscuous mode
Feb  2 06:59:19 np0005604943 systemd-udevd[253334]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.364 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:19 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:19Z|00094|binding|INFO|Claiming lport 86db1a97-63b9-4069-a69f-bc0ef1f8342f for this chassis.
Feb  2 06:59:19 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:19Z|00095|binding|INFO|86db1a97-63b9-4069-a69f-bc0ef1f8342f: Claiming fa:16:3e:ed:66:a5 10.100.0.3
Feb  2 06:59:19 np0005604943 NetworkManager[49093]: <info>  [1770033559.3745] device (tap86db1a97-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 06:59:19 np0005604943 NetworkManager[49093]: <info>  [1770033559.3751] device (tap86db1a97-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.378 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:66:a5 10.100.0.3'], port_security=['fa:16:3e:ed:66:a5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b6e0af38-f069-4516-848d-2b7093956fa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d5311c0-0d13-45dd-abcb-d46d409b1a1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdcfa3aaa83541878311def7781b5b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '284e4bd7-1dab-4d9d-9034-a93c0bfa4056', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1724d015-9fba-41e3-bd31-dd97100bf6bd, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=86db1a97-63b9-4069-a69f-bc0ef1f8342f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.379 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 86db1a97-63b9-4069-a69f-bc0ef1f8342f in datapath 0d5311c0-0d13-45dd-abcb-d46d409b1a1d bound to our chassis#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.381 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d5311c0-0d13-45dd-abcb-d46d409b1a1d#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.390 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f57224-bba2-4daf-ad1b-678a85ed5dcf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.390 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0d5311c0-01 in ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.391 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0d5311c0-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.391 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c8707ad2-9c4e-47f9-9851-d57d18b3a3a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.392 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7dffae-3a5a-4402-82a4-eba1eccca7fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:19Z|00096|binding|INFO|Setting lport 86db1a97-63b9-4069-a69f-bc0ef1f8342f ovn-installed in OVS
Feb  2 06:59:19 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:19Z|00097|binding|INFO|Setting lport 86db1a97-63b9-4069-a69f-bc0ef1f8342f up in Southbound
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.396 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.401 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[086448ad-6a54-4963-a2da-cff4a1620bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 systemd-machined[206973]: New machine qemu-10-instance-0000000a.
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.412 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[231076a1-de19-4912-9951-40712843d97a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.437 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8cabc0-e0c9-4969-9680-200ec0380c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 NetworkManager[49093]: <info>  [1770033559.4425] manager: (tap0d5311c0-00): new Veth device (/org/freedesktop/NetworkManager/Devices/59)
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.441 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[da3635b9-2e76-46d3-8636-bef4581c39fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.464 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[70918afc-173c-4131-816c-0a65a06b70c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.467 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[70ac576d-0f32-4156-b2aa-fe7951ba2291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 NetworkManager[49093]: <info>  [1770033559.4815] device (tap0d5311c0-00): carrier: link connected
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.485 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[4d33b330-f85f-443a-88bf-7d35ab872012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.499 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[909e65a1-0540-492d-a7be-cf24e4f6018f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d5311c0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:49:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400483, 'reachable_time': 35667, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253370, 'error': None, 'target': 'ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.516 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3216d5-aa10-4418-8f02-b22771f86193]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:498d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400483, 'tstamp': 400483}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253371, 'error': None, 'target': 'ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.536 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[968af77d-9651-43e7-b6db-0d27c65c988a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d5311c0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:49:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400483, 'reachable_time': 35667, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253372, 'error': None, 'target': 'ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.566 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3e81a5dc-e26e-4669-932a-2b09dd49ec3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.610 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2a1825-5085-46f6-91ce-8cbecbb37b2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.611 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d5311c0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.611 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.612 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d5311c0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:19 np0005604943 kernel: tap0d5311c0-00: entered promiscuous mode
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.615 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:19 np0005604943 NetworkManager[49093]: <info>  [1770033559.6157] manager: (tap0d5311c0-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.617 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.619 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d5311c0-00, col_values=(('external_ids', {'iface-id': '9668654a-b075-417b-a783-86a27a04d357'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.620 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:19 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:19Z|00098|binding|INFO|Releasing lport 9668654a-b075-417b-a783-86a27a04d357 from this chassis (sb_readonly=0)
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.621 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.623 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0d5311c0-0d13-45dd-abcb-d46d409b1a1d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0d5311c0-0d13-45dd-abcb-d46d409b1a1d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.624 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[72682801-105f-46ea-956d-f4f2eb1a95aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.624 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-0d5311c0-0d13-45dd-abcb-d46d409b1a1d
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/0d5311c0-0d13-45dd-abcb-d46d409b1a1d.pid.haproxy
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID 0d5311c0-0d13-45dd-abcb-d46d409b1a1d
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 06:59:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:19.625 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d', 'env', 'PROCESS_TAG=haproxy-0d5311c0-0d13-45dd-abcb-d46d409b1a1d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0d5311c0-0d13-45dd-abcb-d46d409b1a1d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.628 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.755 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033559.755514, b6e0af38-f069-4516-848d-2b7093956fa0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.756 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] VM Started (Lifecycle Event)#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.781 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.786 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033559.758194, b6e0af38-f069-4516-848d-2b7093956fa0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.787 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] VM Paused (Lifecycle Event)#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.810 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.814 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:59:19 np0005604943 nova_compute[238883]: 2026-02-02 11:59:19.839 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:59:19 np0005604943 podman[253445]: 2026-02-02 11:59:19.974316374 +0000 UTC m=+0.039238570 container create d81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb  2 06:59:20 np0005604943 systemd[1]: Started libpod-conmon-d81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4.scope.
Feb  2 06:59:20 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:59:20 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c895f4a4accc9081bd9aec7e9d8492b4530282908163da8d409623b9ade303/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:20 np0005604943 podman[253445]: 2026-02-02 11:59:19.954790242 +0000 UTC m=+0.019712458 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 06:59:20 np0005604943 podman[253445]: 2026-02-02 11:59:20.053035839 +0000 UTC m=+0.117958035 container init d81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb  2 06:59:20 np0005604943 podman[253445]: 2026-02-02 11:59:20.058316557 +0000 UTC m=+0.123238753 container start d81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:59:20 np0005604943 neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d[253460]: [NOTICE]   (253465) : New worker (253467) forked
Feb  2 06:59:20 np0005604943 neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d[253460]: [NOTICE]   (253465) : Loading success.
Feb  2 06:59:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 134 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.3 MiB/s wr, 115 op/s
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.694 238887 DEBUG nova.compute.manager [req-4bbc555f-086a-47f8-8d07-23f9f8a38de9 req-b78eb991-8936-4148-878d-9d97ca350725 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Received event network-vif-plugged-86db1a97-63b9-4069-a69f-bc0ef1f8342f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.695 238887 DEBUG oslo_concurrency.lockutils [req-4bbc555f-086a-47f8-8d07-23f9f8a38de9 req-b78eb991-8936-4148-878d-9d97ca350725 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.695 238887 DEBUG oslo_concurrency.lockutils [req-4bbc555f-086a-47f8-8d07-23f9f8a38de9 req-b78eb991-8936-4148-878d-9d97ca350725 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.695 238887 DEBUG oslo_concurrency.lockutils [req-4bbc555f-086a-47f8-8d07-23f9f8a38de9 req-b78eb991-8936-4148-878d-9d97ca350725 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.695 238887 DEBUG nova.compute.manager [req-4bbc555f-086a-47f8-8d07-23f9f8a38de9 req-b78eb991-8936-4148-878d-9d97ca350725 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Processing event network-vif-plugged-86db1a97-63b9-4069-a69f-bc0ef1f8342f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.696 238887 DEBUG nova.compute.manager [req-4bbc555f-086a-47f8-8d07-23f9f8a38de9 req-b78eb991-8936-4148-878d-9d97ca350725 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Received event network-vif-plugged-86db1a97-63b9-4069-a69f-bc0ef1f8342f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.696 238887 DEBUG oslo_concurrency.lockutils [req-4bbc555f-086a-47f8-8d07-23f9f8a38de9 req-b78eb991-8936-4148-878d-9d97ca350725 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.696 238887 DEBUG oslo_concurrency.lockutils [req-4bbc555f-086a-47f8-8d07-23f9f8a38de9 req-b78eb991-8936-4148-878d-9d97ca350725 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.696 238887 DEBUG oslo_concurrency.lockutils [req-4bbc555f-086a-47f8-8d07-23f9f8a38de9 req-b78eb991-8936-4148-878d-9d97ca350725 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.697 238887 DEBUG nova.compute.manager [req-4bbc555f-086a-47f8-8d07-23f9f8a38de9 req-b78eb991-8936-4148-878d-9d97ca350725 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] No waiting events found dispatching network-vif-plugged-86db1a97-63b9-4069-a69f-bc0ef1f8342f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.697 238887 WARNING nova.compute.manager [req-4bbc555f-086a-47f8-8d07-23f9f8a38de9 req-b78eb991-8936-4148-878d-9d97ca350725 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Received unexpected event network-vif-plugged-86db1a97-63b9-4069-a69f-bc0ef1f8342f for instance with vm_state building and task_state spawning.#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.697 238887 DEBUG nova.compute.manager [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.703 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033560.7028265, b6e0af38-f069-4516-848d-2b7093956fa0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.703 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] VM Resumed (Lifecycle Event)#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.705 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.710 238887 INFO nova.virt.libvirt.driver [-] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Instance spawned successfully.#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.712 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.765 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.768 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.785 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.786 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.787 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.788 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.788 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.789 238887 DEBUG nova.virt.libvirt.driver [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.851 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:20 np0005604943 NetworkManager[49093]: <info>  [1770033560.8525] manager: (patch-br-int-to-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Feb  2 06:59:20 np0005604943 NetworkManager[49093]: <info>  [1770033560.8533] manager: (patch-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.859 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.913 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:20 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:20Z|00099|binding|INFO|Releasing lport 9668654a-b075-417b-a783-86a27a04d357 from this chassis (sb_readonly=0)
Feb  2 06:59:20 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:20Z|00100|binding|INFO|Releasing lport 7f7a24e7-2e36-4c1c-8857-8367e857534f from this chassis (sb_readonly=0)
Feb  2 06:59:20 np0005604943 nova_compute[238883]: 2026-02-02 11:59:20.925 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:21 np0005604943 nova_compute[238883]: 2026-02-02 11:59:21.092 238887 INFO nova.compute.manager [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Took 7.91 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 06:59:21 np0005604943 nova_compute[238883]: 2026-02-02 11:59:21.092 238887 DEBUG nova.compute.manager [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 06:59:21 np0005604943 nova_compute[238883]: 2026-02-02 11:59:21.121 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:21 np0005604943 nova_compute[238883]: 2026-02-02 11:59:21.279 238887 INFO nova.compute.manager [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Took 9.07 seconds to build instance.#033[00m
Feb  2 06:59:21 np0005604943 nova_compute[238883]: 2026-02-02 11:59:21.414 238887 DEBUG nova.compute.manager [req-12874e31-7e0d-4973-93d9-3a97ee43ed06 req-6d5365b5-81ed-420c-8d3c-aa26b7686ac4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Received event network-changed-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:59:21 np0005604943 nova_compute[238883]: 2026-02-02 11:59:21.415 238887 DEBUG nova.compute.manager [req-12874e31-7e0d-4973-93d9-3a97ee43ed06 req-6d5365b5-81ed-420c-8d3c-aa26b7686ac4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Refreshing instance network info cache due to event network-changed-5ea4af23-2f74-4e93-8aa4-42a49865dbf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:59:21 np0005604943 nova_compute[238883]: 2026-02-02 11:59:21.416 238887 DEBUG oslo_concurrency.lockutils [req-12874e31-7e0d-4973-93d9-3a97ee43ed06 req-6d5365b5-81ed-420c-8d3c-aa26b7686ac4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-484e5b46-6672-4796-8f30-6d3e862428d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:59:21 np0005604943 nova_compute[238883]: 2026-02-02 11:59:21.416 238887 DEBUG oslo_concurrency.lockutils [req-12874e31-7e0d-4973-93d9-3a97ee43ed06 req-6d5365b5-81ed-420c-8d3c-aa26b7686ac4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-484e5b46-6672-4796-8f30-6d3e862428d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:59:21 np0005604943 nova_compute[238883]: 2026-02-02 11:59:21.417 238887 DEBUG nova.network.neutron [req-12874e31-7e0d-4973-93d9-3a97ee43ed06 req-6d5365b5-81ed-420c-8d3c-aa26b7686ac4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Refreshing network info cache for port 5ea4af23-2f74-4e93-8aa4-42a49865dbf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:59:21 np0005604943 nova_compute[238883]: 2026-02-02 11:59:21.419 238887 DEBUG oslo_concurrency.lockutils [None req-b9980270-73df-40d9-9a92-aa99f57d4ee9 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006966824223227184 of space, bias 1.0, pg target 0.2090047266968155 quantized to 32 (current 32)
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.2324798260069207e-05 of space, bias 1.0, pg target 0.003697439478020762 quantized to 32 (current 32)
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.6978222185678763e-07 of space, bias 1.0, pg target 5.093466655703629e-05 quantized to 32 (current 32)
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661790030662565 of space, bias 1.0, pg target 0.19985370091987695 quantized to 32 (current 32)
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2395220000956405e-06 of space, bias 4.0, pg target 0.0014874264001147686 quantized to 16 (current 16)
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 06:59:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 06:59:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 134 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.3 MiB/s wr, 115 op/s
Feb  2 06:59:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:59:22 np0005604943 nova_compute[238883]: 2026-02-02 11:59:22.739 238887 DEBUG nova.network.neutron [req-12874e31-7e0d-4973-93d9-3a97ee43ed06 req-6d5365b5-81ed-420c-8d3c-aa26b7686ac4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Updated VIF entry in instance network info cache for port 5ea4af23-2f74-4e93-8aa4-42a49865dbf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:59:22 np0005604943 nova_compute[238883]: 2026-02-02 11:59:22.739 238887 DEBUG nova.network.neutron [req-12874e31-7e0d-4973-93d9-3a97ee43ed06 req-6d5365b5-81ed-420c-8d3c-aa26b7686ac4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Updating instance_info_cache with network_info: [{"id": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "address": "fa:16:3e:92:d4:07", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea4af23-2f", "ovs_interfaceid": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:59:22 np0005604943 nova_compute[238883]: 2026-02-02 11:59:22.811 238887 DEBUG oslo_concurrency.lockutils [req-12874e31-7e0d-4973-93d9-3a97ee43ed06 req-6d5365b5-81ed-420c-8d3c-aa26b7686ac4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-484e5b46-6672-4796-8f30-6d3e862428d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:59:23 np0005604943 nova_compute[238883]: 2026-02-02 11:59:23.585 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 135 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.5 MiB/s wr, 211 op/s
Feb  2 06:59:24 np0005604943 nova_compute[238883]: 2026-02-02 11:59:24.470 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:24 np0005604943 nova_compute[238883]: 2026-02-02 11:59:24.995 238887 DEBUG nova.compute.manager [req-c3e7fce4-e8b6-4fba-97e8-582a826a4180 req-8104a56a-deb2-498f-be77-162883b692a4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Received event network-changed-86db1a97-63b9-4069-a69f-bc0ef1f8342f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:59:24 np0005604943 nova_compute[238883]: 2026-02-02 11:59:24.995 238887 DEBUG nova.compute.manager [req-c3e7fce4-e8b6-4fba-97e8-582a826a4180 req-8104a56a-deb2-498f-be77-162883b692a4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Refreshing instance network info cache due to event network-changed-86db1a97-63b9-4069-a69f-bc0ef1f8342f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:59:24 np0005604943 nova_compute[238883]: 2026-02-02 11:59:24.995 238887 DEBUG oslo_concurrency.lockutils [req-c3e7fce4-e8b6-4fba-97e8-582a826a4180 req-8104a56a-deb2-498f-be77-162883b692a4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-b6e0af38-f069-4516-848d-2b7093956fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:59:24 np0005604943 nova_compute[238883]: 2026-02-02 11:59:24.996 238887 DEBUG oslo_concurrency.lockutils [req-c3e7fce4-e8b6-4fba-97e8-582a826a4180 req-8104a56a-deb2-498f-be77-162883b692a4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-b6e0af38-f069-4516-848d-2b7093956fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:59:24 np0005604943 nova_compute[238883]: 2026-02-02 11:59:24.996 238887 DEBUG nova.network.neutron [req-c3e7fce4-e8b6-4fba-97e8-582a826a4180 req-8104a56a-deb2-498f-be77-162883b692a4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Refreshing network info cache for port 86db1a97-63b9-4069-a69f-bc0ef1f8342f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:59:26 np0005604943 nova_compute[238883]: 2026-02-02 11:59:26.153 238887 DEBUG nova.network.neutron [req-c3e7fce4-e8b6-4fba-97e8-582a826a4180 req-8104a56a-deb2-498f-be77-162883b692a4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Updated VIF entry in instance network info cache for port 86db1a97-63b9-4069-a69f-bc0ef1f8342f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:59:26 np0005604943 nova_compute[238883]: 2026-02-02 11:59:26.154 238887 DEBUG nova.network.neutron [req-c3e7fce4-e8b6-4fba-97e8-582a826a4180 req-8104a56a-deb2-498f-be77-162883b692a4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Updating instance_info_cache with network_info: [{"id": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "address": "fa:16:3e:ed:66:a5", "network": {"id": "0d5311c0-0d13-45dd-abcb-d46d409b1a1d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1807408621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdcfa3aaa83541878311def7781b5b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86db1a97-63", "ovs_interfaceid": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:59:26 np0005604943 nova_compute[238883]: 2026-02-02 11:59:26.157 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:26 np0005604943 nova_compute[238883]: 2026-02-02 11:59:26.177 238887 DEBUG oslo_concurrency.lockutils [req-c3e7fce4-e8b6-4fba-97e8-582a826a4180 req-8104a56a-deb2-498f-be77-162883b692a4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-b6e0af38-f069-4516-848d-2b7093956fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:59:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 135 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.5 MiB/s wr, 211 op/s
Feb  2 06:59:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:59:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 135 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.2 MiB/s wr, 190 op/s
Feb  2 06:59:28 np0005604943 nova_compute[238883]: 2026-02-02 11:59:28.590 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:28 np0005604943 nova_compute[238883]: 2026-02-02 11:59:28.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:59:29 np0005604943 nova_compute[238883]: 2026-02-02 11:59:29.213 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:29 np0005604943 nova_compute[238883]: 2026-02-02 11:59:29.668 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:59:29 np0005604943 nova_compute[238883]: 2026-02-02 11:59:29.669 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 06:59:29 np0005604943 nova_compute[238883]: 2026-02-02 11:59:29.669 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.070 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.071 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.071 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.071 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.072 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 135 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 36 KiB/s wr, 142 op/s
Feb  2 06:59:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:59:30 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/907607090' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.621 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.692 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.693 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.698 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.698 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.877 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.879 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4254MB free_disk=59.946321008726954GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.879 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.879 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.963 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 484e5b46-6672-4796-8f30-6d3e862428d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.963 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance b6e0af38-f069-4516-848d-2b7093956fa0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.964 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 06:59:30 np0005604943 nova_compute[238883]: 2026-02-02 11:59:30.964 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 06:59:31 np0005604943 nova_compute[238883]: 2026-02-02 11:59:31.009 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:31 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:31Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:92:d4:07 10.100.0.7
Feb  2 06:59:31 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:31Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:92:d4:07 10.100.0.7
Feb  2 06:59:31 np0005604943 nova_compute[238883]: 2026-02-02 11:59:31.154 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:59:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2912190909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:59:31 np0005604943 nova_compute[238883]: 2026-02-02 11:59:31.556 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:31 np0005604943 nova_compute[238883]: 2026-02-02 11:59:31.563 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:59:31 np0005604943 nova_compute[238883]: 2026-02-02 11:59:31.580 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:59:31 np0005604943 nova_compute[238883]: 2026-02-02 11:59:31.598 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 06:59:31 np0005604943 nova_compute[238883]: 2026-02-02 11:59:31.599 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 135 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 137 op/s
Feb  2 06:59:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:59:32 np0005604943 nova_compute[238883]: 2026-02-02 11:59:32.572 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:59:32 np0005604943 nova_compute[238883]: 2026-02-02 11:59:32.573 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:59:32 np0005604943 nova_compute[238883]: 2026-02-02 11:59:32.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:59:32 np0005604943 nova_compute[238883]: 2026-02-02 11:59:32.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 06:59:32 np0005604943 nova_compute[238883]: 2026-02-02 11:59:32.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 06:59:33 np0005604943 podman[253523]: 2026-02-02 11:59:33.041921804 +0000 UTC m=+0.058011805 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb  2 06:59:33 np0005604943 podman[253522]: 2026-02-02 11:59:33.05208956 +0000 UTC m=+0.073787083 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 06:59:33 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:33Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ed:66:a5 10.100.0.3
Feb  2 06:59:33 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:33Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ed:66:a5 10.100.0.3
Feb  2 06:59:33 np0005604943 nova_compute[238883]: 2026-02-02 11:59:33.592 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:33 np0005604943 nova_compute[238883]: 2026-02-02 11:59:33.695 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "refresh_cache-484e5b46-6672-4796-8f30-6d3e862428d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:59:33 np0005604943 nova_compute[238883]: 2026-02-02 11:59:33.695 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquired lock "refresh_cache-484e5b46-6672-4796-8f30-6d3e862428d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:59:33 np0005604943 nova_compute[238883]: 2026-02-02 11:59:33.695 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 06:59:33 np0005604943 nova_compute[238883]: 2026-02-02 11:59:33.695 238887 DEBUG nova.objects.instance [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 484e5b46-6672-4796-8f30-6d3e862428d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:59:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 192 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.2 MiB/s wr, 256 op/s
Feb  2 06:59:36 np0005604943 nova_compute[238883]: 2026-02-02 11:59:36.155 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 192 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 632 KiB/s rd, 4.2 MiB/s wr, 118 op/s
Feb  2 06:59:36 np0005604943 nova_compute[238883]: 2026-02-02 11:59:36.946 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Updating instance_info_cache with network_info: [{"id": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "address": "fa:16:3e:92:d4:07", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea4af23-2f", "ovs_interfaceid": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:59:36 np0005604943 nova_compute[238883]: 2026-02-02 11:59:36.973 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Releasing lock "refresh_cache-484e5b46-6672-4796-8f30-6d3e862428d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:59:36 np0005604943 nova_compute[238883]: 2026-02-02 11:59:36.973 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 06:59:36 np0005604943 nova_compute[238883]: 2026-02-02 11:59:36.973 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:59:36 np0005604943 nova_compute[238883]: 2026-02-02 11:59:36.973 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.178 238887 DEBUG oslo_concurrency.lockutils [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "484e5b46-6672-4796-8f30-6d3e862428d3" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.178 238887 DEBUG oslo_concurrency.lockutils [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.196 238887 DEBUG nova.objects.instance [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lazy-loading 'flavor' on Instance uuid 484e5b46-6672-4796-8f30-6d3e862428d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.226 238887 INFO nova.virt.libvirt.driver [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Ignoring supplied device name: /dev/vdb#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.301 238887 DEBUG oslo_concurrency.lockutils [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.505 238887 DEBUG oslo_concurrency.lockutils [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "484e5b46-6672-4796-8f30-6d3e862428d3" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.506 238887 DEBUG oslo_concurrency.lockutils [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.506 238887 INFO nova.compute.manager [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Attaching volume 63f6256e-2171-493c-8888-ea8c800ad577 to /dev/vdb#033[00m
Feb  2 06:59:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.664 238887 DEBUG os_brick.utils [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.665 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.672 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.673 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9bb311-ba73-4c20-a5f2-ca4994bba8d0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.673 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.678 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.678 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[346e0d7e-a07a-4ad4-b1b2-a947efaca21b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.680 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.684 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.684 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[8efe996b-38fd-451b-8201-f9d82e27be7a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.685 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7749e5-f280-485a-a71e-cfa23b5b2fe8]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.685 238887 DEBUG oslo_concurrency.processutils [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.700 238887 DEBUG oslo_concurrency.processutils [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "nvme version" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.701 238887 DEBUG os_brick.initiator.connectors.lightos [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.702 238887 DEBUG os_brick.initiator.connectors.lightos [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.702 238887 DEBUG os_brick.initiator.connectors.lightos [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.702 238887 DEBUG os_brick.utils [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] <== get_connector_properties: return (36ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 06:59:37 np0005604943 nova_compute[238883]: 2026-02-02 11:59:37.702 238887 DEBUG nova.virt.block_device [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Updating existing volume attachment record: eb9fc6c3-6ff1-427d-b722-5aff763d2f3d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 06:59:38 np0005604943 nova_compute[238883]: 2026-02-02 11:59:38.072 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 662 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Feb  2 06:59:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:59:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3323391472' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:59:38 np0005604943 nova_compute[238883]: 2026-02-02 11:59:38.564 238887 DEBUG nova.objects.instance [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lazy-loading 'flavor' on Instance uuid 484e5b46-6672-4796-8f30-6d3e862428d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:59:38 np0005604943 nova_compute[238883]: 2026-02-02 11:59:38.592 238887 DEBUG nova.virt.libvirt.driver [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Attempting to attach volume 63f6256e-2171-493c-8888-ea8c800ad577 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 06:59:38 np0005604943 nova_compute[238883]: 2026-02-02 11:59:38.594 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:38 np0005604943 nova_compute[238883]: 2026-02-02 11:59:38.597 238887 DEBUG nova.virt.libvirt.guest [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 06:59:38 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:59:38 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-63f6256e-2171-493c-8888-ea8c800ad577">
Feb  2 06:59:38 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:59:38 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:59:38 np0005604943 nova_compute[238883]:  <auth username="openstack">
Feb  2 06:59:38 np0005604943 nova_compute[238883]:    <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:59:38 np0005604943 nova_compute[238883]:  </auth>
Feb  2 06:59:38 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:59:38 np0005604943 nova_compute[238883]:  <serial>63f6256e-2171-493c-8888-ea8c800ad577</serial>
Feb  2 06:59:38 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:59:38 np0005604943 nova_compute[238883]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 06:59:38 np0005604943 nova_compute[238883]: 2026-02-02 11:59:38.693 238887 DEBUG nova.virt.libvirt.driver [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb  2 06:59:38 np0005604943 nova_compute[238883]: 2026-02-02 11:59:38.693 238887 DEBUG nova.virt.libvirt.driver [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb  2 06:59:38 np0005604943 nova_compute[238883]: 2026-02-02 11:59:38.695 238887 DEBUG nova.virt.libvirt.driver [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb  2 06:59:38 np0005604943 nova_compute[238883]: 2026-02-02 11:59:38.695 238887 DEBUG nova.virt.libvirt.driver [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] No VIF found with MAC fa:16:3e:92:d4:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb  2 06:59:38 np0005604943 nova_compute[238883]: 2026-02-02 11:59:38.921 238887 DEBUG oslo_concurrency.lockutils [None req-6a8b26b6-23c1-46ed-b54e-03c9c0ca9909 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:59:39 np0005604943 nova_compute[238883]: 2026-02-02 11:59:39.966 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:59:39 np0005604943 nova_compute[238883]: 2026-02-02 11:59:39.966 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 06:59:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 663 KiB/s rd, 4.3 MiB/s wr, 131 op/s
Feb  2 06:59:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:59:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:59:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:59:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:59:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 06:59:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 06:59:40 np0005604943 nova_compute[238883]: 2026-02-02 11:59:40.903 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:59:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:59:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2494098000' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.182 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.407 238887 DEBUG oslo_concurrency.lockutils [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Acquiring lock "b6e0af38-f069-4516-848d-2b7093956fa0" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.408 238887 DEBUG oslo_concurrency.lockutils [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.423 238887 DEBUG nova.objects.instance [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lazy-loading 'flavor' on Instance uuid b6e0af38-f069-4516-848d-2b7093956fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.453 238887 DEBUG oslo_concurrency.lockutils [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.632 238887 DEBUG oslo_concurrency.lockutils [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Acquiring lock "b6e0af38-f069-4516-848d-2b7093956fa0" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.633 238887 DEBUG oslo_concurrency.lockutils [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.633 238887 INFO nova.compute.manager [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Attaching volume aca90834-e148-4459-aed0-0c337f51a2bf to /dev/vdb
Feb  2 06:59:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Feb  2 06:59:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Feb  2 06:59:41 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.762 238887 DEBUG os_brick.utils [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.763 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.770 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.770 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[5159c40e-80db-4846-81e2-c0069e71081c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.771 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.776 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.776 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[45bd5b27-c40b-49aa-beb6-be2b57f0e5e5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.777 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.782 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.782 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[1f476fa5-89f3-4cca-afdb-1a5bb316e548]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.783 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[e70cdd45-9dfd-4854-9e88-89ef2b774bed]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.783 238887 DEBUG oslo_concurrency.processutils [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.795 238887 DEBUG oslo_concurrency.processutils [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] CMD "nvme version" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.797 238887 DEBUG os_brick.initiator.connectors.lightos [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.797 238887 DEBUG os_brick.initiator.connectors.lightos [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.797 238887 DEBUG os_brick.initiator.connectors.lightos [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.797 238887 DEBUG os_brick.utils [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] <== get_connector_properties: return (34ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb  2 06:59:41 np0005604943 nova_compute[238883]: 2026-02-02 11:59:41.798 238887 DEBUG nova.virt.block_device [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Updating existing volume attachment record: c8d4451f-c30d-4f8d-8af4-fd9ebb28b87d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb  2 06:59:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 796 KiB/s rd, 5.1 MiB/s wr, 158 op/s
Feb  2 06:59:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:59:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:59:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1729095275' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:59:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Feb  2 06:59:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Feb  2 06:59:42 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Feb  2 06:59:42 np0005604943 nova_compute[238883]: 2026-02-02 11:59:42.889 238887 DEBUG os_brick.encryptors [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Using volume encryption metadata '{'encryption_key_id': '88b3bbaf-b642-49f0-9998-51b170957091', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-aca90834-e148-4459-aed0-0c337f51a2bf', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'aca90834-e148-4459-aed0-0c337f51a2bf', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'b6e0af38-f069-4516-848d-2b7093956fa0', 'attached_at': '', 'detached_at': '', 'volume_id': 'aca90834-e148-4459-aed0-0c337f51a2bf', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Feb  2 06:59:42 np0005604943 nova_compute[238883]: 2026-02-02 11:59:42.894 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Feb  2 06:59:42 np0005604943 nova_compute[238883]: 2026-02-02 11:59:42.920 238887 DEBUG barbicanclient.v1.secrets [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/88b3bbaf-b642-49f0-9998-51b170957091 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Feb  2 06:59:42 np0005604943 nova_compute[238883]: 2026-02-02 11:59:42.920 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:42 np0005604943 nova_compute[238883]: 2026-02-02 11:59:42.959 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:42 np0005604943 nova_compute[238883]: 2026-02-02 11:59:42.960 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:42 np0005604943 nova_compute[238883]: 2026-02-02 11:59:42.995 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:42 np0005604943 nova_compute[238883]: 2026-02-02 11:59:42.996 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.019 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.020 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.045 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.046 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.078 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.078 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.098 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.098 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.118 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.119 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.139 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.139 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.170 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.171 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.200 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.201 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.234 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.235 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.256 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.257 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.281 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.281 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.302 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.303 238887 INFO barbicanclient.base [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Calculated Secrets uuid ref: secrets/88b3bbaf-b642-49f0-9998-51b170957091
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.327 238887 DEBUG barbicanclient.client [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.328 238887 DEBUG nova.virt.libvirt.host [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 06:59:43 np0005604943 nova_compute[238883]:  <usage type="volume">
Feb  2 06:59:43 np0005604943 nova_compute[238883]:    <volume>aca90834-e148-4459-aed0-0c337f51a2bf</volume>
Feb  2 06:59:43 np0005604943 nova_compute[238883]:  </usage>
Feb  2 06:59:43 np0005604943 nova_compute[238883]: </secret>
Feb  2 06:59:43 np0005604943 nova_compute[238883]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.338 238887 DEBUG nova.objects.instance [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lazy-loading 'flavor' on Instance uuid b6e0af38-f069-4516-848d-2b7093956fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.368 238887 DEBUG nova.virt.libvirt.driver [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Attempting to attach volume aca90834-e148-4459-aed0-0c337f51a2bf with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.370 238887 DEBUG nova.virt.libvirt.guest [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 06:59:43 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:59:43 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-aca90834-e148-4459-aed0-0c337f51a2bf">
Feb  2 06:59:43 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:59:43 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:59:43 np0005604943 nova_compute[238883]:  <auth username="openstack">
Feb  2 06:59:43 np0005604943 nova_compute[238883]:    <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:59:43 np0005604943 nova_compute[238883]:  </auth>
Feb  2 06:59:43 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:59:43 np0005604943 nova_compute[238883]:  <serial>aca90834-e148-4459-aed0-0c337f51a2bf</serial>
Feb  2 06:59:43 np0005604943 nova_compute[238883]:  <encryption format="luks">
Feb  2 06:59:43 np0005604943 nova_compute[238883]:    <secret type="passphrase" uuid="c3c8a530-f3a5-4b10-9119-8971e3f9f2f2"/>
Feb  2 06:59:43 np0005604943 nova_compute[238883]:  </encryption>
Feb  2 06:59:43 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:59:43 np0005604943 nova_compute[238883]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb  2 06:59:43 np0005604943 nova_compute[238883]: 2026-02-02 11:59:43.596 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:59:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:59:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3101931663' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:59:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 200 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 83 KiB/s wr, 43 op/s
Feb  2 06:59:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Feb  2 06:59:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Feb  2 06:59:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Feb  2 06:59:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:59:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/331967443' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:59:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:59:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/331967443' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:59:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Feb  2 06:59:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Feb  2 06:59:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Feb  2 06:59:45 np0005604943 nova_compute[238883]: 2026-02-02 11:59:45.806 238887 DEBUG nova.virt.libvirt.driver [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:59:45 np0005604943 nova_compute[238883]: 2026-02-02 11:59:45.807 238887 DEBUG nova.virt.libvirt.driver [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:59:45 np0005604943 nova_compute[238883]: 2026-02-02 11:59:45.807 238887 DEBUG nova.virt.libvirt.driver [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 06:59:45 np0005604943 nova_compute[238883]: 2026-02-02 11:59:45.807 238887 DEBUG nova.virt.libvirt.driver [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] No VIF found with MAC fa:16:3e:ed:66:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.001 238887 DEBUG oslo_concurrency.lockutils [None req-f4e900ae-ef8a-45a3-b0a4-4dba655db37b 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.044 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.045 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.057 238887 DEBUG nova.compute.manager [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.137 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.138 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.144 238887 DEBUG nova.virt.hardware [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.145 238887 INFO nova.compute.claims [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.183 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 200 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 10 KiB/s wr, 42 op/s
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.274 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.554 238887 DEBUG oslo_concurrency.lockutils [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Acquiring lock "b6e0af38-f069-4516-848d-2b7093956fa0" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.554 238887 DEBUG oslo_concurrency.lockutils [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.569 238887 INFO nova.compute.manager [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Detaching volume aca90834-e148-4459-aed0-0c337f51a2bf#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.692 238887 INFO nova.virt.block_device [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Attempting to driver detach volume aca90834-e148-4459-aed0-0c337f51a2bf from mountpoint /dev/vdb#033[00m
Feb  2 06:59:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:59:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2594254781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.788 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.793 238887 DEBUG nova.compute.provider_tree [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.809 238887 DEBUG nova.scheduler.client.report [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.828 238887 DEBUG os_brick.encryptors [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Using volume encryption metadata '{'encryption_key_id': '88b3bbaf-b642-49f0-9998-51b170957091', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-aca90834-e148-4459-aed0-0c337f51a2bf', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'aca90834-e148-4459-aed0-0c337f51a2bf', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'b6e0af38-f069-4516-848d-2b7093956fa0', 'attached_at': '', 'detached_at': '', 'volume_id': 'aca90834-e148-4459-aed0-0c337f51a2bf', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.836 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.837 238887 DEBUG nova.compute.manager [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.842 238887 DEBUG nova.virt.libvirt.driver [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Attempting to detach device vdb from instance b6e0af38-f069-4516-848d-2b7093956fa0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.843 238887 DEBUG nova.virt.libvirt.guest [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-aca90834-e148-4459-aed0-0c337f51a2bf">
Feb  2 06:59:46 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  <serial>aca90834-e148-4459-aed0-0c337f51a2bf</serial>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  <encryption format="luks">
Feb  2 06:59:46 np0005604943 nova_compute[238883]:    <secret type="passphrase" uuid="c3c8a530-f3a5-4b10-9119-8971e3f9f2f2"/>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  </encryption>
Feb  2 06:59:46 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:59:46 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.852 238887 INFO nova.virt.libvirt.driver [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Successfully detached device vdb from instance b6e0af38-f069-4516-848d-2b7093956fa0 from the persistent domain config.#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.853 238887 DEBUG nova.virt.libvirt.driver [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b6e0af38-f069-4516-848d-2b7093956fa0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.853 238887 DEBUG nova.virt.libvirt.guest [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-aca90834-e148-4459-aed0-0c337f51a2bf">
Feb  2 06:59:46 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  <serial>aca90834-e148-4459-aed0-0c337f51a2bf</serial>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  <encryption format="luks">
Feb  2 06:59:46 np0005604943 nova_compute[238883]:    <secret type="passphrase" uuid="c3c8a530-f3a5-4b10-9119-8971e3f9f2f2"/>
Feb  2 06:59:46 np0005604943 nova_compute[238883]:  </encryption>
Feb  2 06:59:46 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:59:46 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.881 238887 DEBUG nova.compute.manager [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.882 238887 DEBUG nova.network.neutron [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.897 238887 INFO nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.901 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Received event <DeviceRemovedEvent: 1770033586.901087, b6e0af38-f069-4516-848d-2b7093956fa0 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.903 238887 DEBUG nova.virt.libvirt.driver [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b6e0af38-f069-4516-848d-2b7093956fa0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.905 238887 INFO nova.virt.libvirt.driver [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Successfully detached device vdb from instance b6e0af38-f069-4516-848d-2b7093956fa0 from the live domain config.#033[00m
Feb  2 06:59:46 np0005604943 nova_compute[238883]: 2026-02-02 11:59:46.913 238887 DEBUG nova.compute.manager [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.001 238887 DEBUG nova.compute.manager [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.002 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.003 238887 INFO nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Creating image(s)#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.022 238887 DEBUG nova.storage.rbd_utils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] rbd image 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.043 238887 DEBUG nova.storage.rbd_utils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] rbd image 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.069 238887 DEBUG nova.storage.rbd_utils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] rbd image 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.074 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.097 238887 DEBUG nova.objects.instance [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lazy-loading 'flavor' on Instance uuid b6e0af38-f069-4516-848d-2b7093956fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.111 238887 DEBUG nova.policy [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '55f5d320b54948c9a8f465d017972291', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '82fc9ca354da4dd4bdccf919f13d3561', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.134 238887 DEBUG oslo_concurrency.lockutils [None req-ffa9b782-7a0c-4baf-b063-9b1a87873fdc 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.154 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.155 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.155 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.156 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.177 238887 DEBUG nova.storage.rbd_utils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] rbd image 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.182 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.372 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.433 238887 DEBUG nova.storage.rbd_utils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] resizing rbd image 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.514 238887 DEBUG nova.objects.instance [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lazy-loading 'migration_context' on Instance uuid 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:59:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.550 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.551 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Ensure instance console log exists: /var/lib/nova/instances/1b038c3f-57e2-4f69-a27c-2ba8d465dfc1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.551 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.552 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.552 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:47 np0005604943 nova_compute[238883]: 2026-02-02 11:59:47.722 238887 DEBUG nova.network.neutron [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Successfully created port: b03048b5-3014-4343-9639-e364514f44d0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 06:59:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Feb  2 06:59:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Feb  2 06:59:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.053 238887 DEBUG oslo_concurrency.lockutils [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Acquiring lock "b6e0af38-f069-4516-848d-2b7093956fa0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.054 238887 DEBUG oslo_concurrency.lockutils [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.054 238887 DEBUG oslo_concurrency.lockutils [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Acquiring lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.054 238887 DEBUG oslo_concurrency.lockutils [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.055 238887 DEBUG oslo_concurrency.lockutils [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.056 238887 INFO nova.compute.manager [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Terminating instance#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.057 238887 DEBUG nova.compute.manager [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 06:59:48 np0005604943 kernel: tap86db1a97-63 (unregistering): left promiscuous mode
Feb  2 06:59:48 np0005604943 NetworkManager[49093]: <info>  [1770033588.1081] device (tap86db1a97-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.118 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:48 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:48Z|00101|binding|INFO|Releasing lport 86db1a97-63b9-4069-a69f-bc0ef1f8342f from this chassis (sb_readonly=0)
Feb  2 06:59:48 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:48Z|00102|binding|INFO|Setting lport 86db1a97-63b9-4069-a69f-bc0ef1f8342f down in Southbound
Feb  2 06:59:48 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:48Z|00103|binding|INFO|Removing iface tap86db1a97-63 ovn-installed in OVS
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.124 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.130 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:66:a5 10.100.0.3'], port_security=['fa:16:3e:ed:66:a5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b6e0af38-f069-4516-848d-2b7093956fa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d5311c0-0d13-45dd-abcb-d46d409b1a1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cdcfa3aaa83541878311def7781b5b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '284e4bd7-1dab-4d9d-9034-a93c0bfa4056', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1724d015-9fba-41e3-bd31-dd97100bf6bd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=86db1a97-63b9-4069-a69f-bc0ef1f8342f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.131 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.132 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 86db1a97-63b9-4069-a69f-bc0ef1f8342f in datapath 0d5311c0-0d13-45dd-abcb-d46d409b1a1d unbound from our chassis#033[00m
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.134 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0d5311c0-0d13-45dd-abcb-d46d409b1a1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.135 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6440b478-3970-451f-85b5-fce66907315f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.136 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d namespace which is not needed anymore#033[00m
Feb  2 06:59:48 np0005604943 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb  2 06:59:48 np0005604943 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 14.850s CPU time.
Feb  2 06:59:48 np0005604943 systemd-machined[206973]: Machine qemu-10-instance-0000000a terminated.
Feb  2 06:59:48 np0005604943 neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d[253460]: [NOTICE]   (253465) : haproxy version is 2.8.14-c23fe91
Feb  2 06:59:48 np0005604943 neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d[253460]: [NOTICE]   (253465) : path to executable is /usr/sbin/haproxy
Feb  2 06:59:48 np0005604943 neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d[253460]: [WARNING]  (253465) : Exiting Master process...
Feb  2 06:59:48 np0005604943 neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d[253460]: [ALERT]    (253465) : Current worker (253467) exited with code 143 (Terminated)
Feb  2 06:59:48 np0005604943 neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d[253460]: [WARNING]  (253465) : All workers exited. Exiting... (0)
Feb  2 06:59:48 np0005604943 systemd[1]: libpod-d81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4.scope: Deactivated successfully.
Feb  2 06:59:48 np0005604943 podman[253837]: 2026-02-02 11:59:48.249757762 +0000 UTC m=+0.038657071 container died d81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb  2 06:59:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 229 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 1.5 MiB/s wr, 101 op/s
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.276 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:48 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4-userdata-shm.mount: Deactivated successfully.
Feb  2 06:59:48 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f5c895f4a4accc9081bd9aec7e9d8492b4530282908163da8d409623b9ade303-merged.mount: Deactivated successfully.
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.280 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.288 238887 INFO nova.virt.libvirt.driver [-] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Instance destroyed successfully.#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.289 238887 DEBUG nova.objects.instance [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lazy-loading 'resources' on Instance uuid b6e0af38-f069-4516-848d-2b7093956fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:59:48 np0005604943 podman[253837]: 2026-02-02 11:59:48.296360807 +0000 UTC m=+0.085260096 container cleanup d81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.300 238887 DEBUG nova.virt.libvirt.vif [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:59:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1701407797',display_name='tempest-TestEncryptedCinderVolumes-server-1701407797',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1701407797',id=10,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEX84H+sJrhf5p3KTmgGpTv7aL1JeHX+jKomKR+iVFNA5sje0qIr0cbd2tkcYekHu8KBhE73g1auIf5O4mdKv3J1DJFzRrlsaIenzFIP4e0B3zCPKMXcvCNhgAiOs0HKTA==',key_name='tempest-keypair-1106499579',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:59:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cdcfa3aaa83541878311def7781b5b82',ramdisk_id='',reservation_id='r-vpsnetyn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-600410399',owner_user_name='tempest-TestEncryptedCinderVolumes-600410399-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:59:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0d0b5cfd8d84432894bd264065bcb0ba',uuid=b6e0af38-f069-4516-848d-2b7093956fa0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "address": "fa:16:3e:ed:66:a5", "network": {"id": "0d5311c0-0d13-45dd-abcb-d46d409b1a1d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1807408621-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdcfa3aaa83541878311def7781b5b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86db1a97-63", "ovs_interfaceid": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.301 238887 DEBUG nova.network.os_vif_util [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Converting VIF {"id": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "address": "fa:16:3e:ed:66:a5", "network": {"id": "0d5311c0-0d13-45dd-abcb-d46d409b1a1d", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1807408621-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cdcfa3aaa83541878311def7781b5b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86db1a97-63", "ovs_interfaceid": "86db1a97-63b9-4069-a69f-bc0ef1f8342f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.302 238887 DEBUG nova.network.os_vif_util [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:66:a5,bridge_name='br-int',has_traffic_filtering=True,id=86db1a97-63b9-4069-a69f-bc0ef1f8342f,network=Network(0d5311c0-0d13-45dd-abcb-d46d409b1a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86db1a97-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:59:48 np0005604943 systemd[1]: libpod-conmon-d81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4.scope: Deactivated successfully.
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.302 238887 DEBUG os_vif [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:66:a5,bridge_name='br-int',has_traffic_filtering=True,id=86db1a97-63b9-4069-a69f-bc0ef1f8342f,network=Network(0d5311c0-0d13-45dd-abcb-d46d409b1a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86db1a97-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.303 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.304 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86db1a97-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.307 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.309 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.311 238887 INFO os_vif [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:66:a5,bridge_name='br-int',has_traffic_filtering=True,id=86db1a97-63b9-4069-a69f-bc0ef1f8342f,network=Network(0d5311c0-0d13-45dd-abcb-d46d409b1a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86db1a97-63')#033[00m
Feb  2 06:59:48 np0005604943 podman[253877]: 2026-02-02 11:59:48.353737694 +0000 UTC m=+0.041377324 container remove d81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.358 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[75980c6f-eb28-4e60-b48a-85e2a09261f8]: (4, ('Mon Feb  2 11:59:48 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d (d81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4)\nd81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4\nMon Feb  2 11:59:48 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d (d81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4)\nd81f8ec38d076eb87a664f0c9b603f147025cc957c4dca0a555a426ec77b28b4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.360 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb10e8b-af33-4ab9-af5b-fbadac241a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.361 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d5311c0-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.363 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:48 np0005604943 kernel: tap0d5311c0-00: left promiscuous mode
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.370 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.372 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ff73d1c5-4834-4050-b860-9899f6da2f33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.391 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[e0df0e88-eb27-497f-b27a-e505c5d6f4bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.392 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb69c52-2e00-4002-a168-5838b0f7ea7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.404 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[06009850-1db2-4d70-b68e-5960f4ec7424]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400478, 'reachable_time': 23987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253910, 'error': None, 'target': 'ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:48 np0005604943 systemd[1]: run-netns-ovnmeta\x2d0d5311c0\x2d0d13\x2d45dd\x2dabcb\x2dd46d409b1a1d.mount: Deactivated successfully.
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.406 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0d5311c0-0d13-45dd-abcb-d46d409b1a1d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 06:59:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:48.406 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[4c849cfc-b398-4486-adaf-d1d1f9dbd31d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.539 238887 DEBUG nova.network.neutron [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Successfully updated port: b03048b5-3014-4343-9639-e364514f44d0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.554 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "refresh_cache-1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.554 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquired lock "refresh_cache-1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.554 238887 DEBUG nova.network.neutron [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.576 238887 INFO nova.virt.libvirt.driver [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Deleting instance files /var/lib/nova/instances/b6e0af38-f069-4516-848d-2b7093956fa0_del#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.577 238887 INFO nova.virt.libvirt.driver [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Deletion of /var/lib/nova/instances/b6e0af38-f069-4516-848d-2b7093956fa0_del complete#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.587 238887 DEBUG nova.compute.manager [req-1d913895-6cab-4048-9bf9-92bac9947149 req-c242d147-6876-43bf-9a10-bb38aeb55d41 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Received event network-vif-unplugged-86db1a97-63b9-4069-a69f-bc0ef1f8342f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.587 238887 DEBUG oslo_concurrency.lockutils [req-1d913895-6cab-4048-9bf9-92bac9947149 req-c242d147-6876-43bf-9a10-bb38aeb55d41 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.588 238887 DEBUG oslo_concurrency.lockutils [req-1d913895-6cab-4048-9bf9-92bac9947149 req-c242d147-6876-43bf-9a10-bb38aeb55d41 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.588 238887 DEBUG oslo_concurrency.lockutils [req-1d913895-6cab-4048-9bf9-92bac9947149 req-c242d147-6876-43bf-9a10-bb38aeb55d41 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.589 238887 DEBUG nova.compute.manager [req-1d913895-6cab-4048-9bf9-92bac9947149 req-c242d147-6876-43bf-9a10-bb38aeb55d41 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] No waiting events found dispatching network-vif-unplugged-86db1a97-63b9-4069-a69f-bc0ef1f8342f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.589 238887 DEBUG nova.compute.manager [req-1d913895-6cab-4048-9bf9-92bac9947149 req-c242d147-6876-43bf-9a10-bb38aeb55d41 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Received event network-vif-unplugged-86db1a97-63b9-4069-a69f-bc0ef1f8342f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.627 238887 INFO nova.compute.manager [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.628 238887 DEBUG oslo.service.loopingcall [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.628 238887 DEBUG nova.compute.manager [-] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.628 238887 DEBUG nova.network.neutron [-] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.638 238887 DEBUG nova.compute.manager [req-aae36be9-fb22-4a75-bd11-c08f9b536316 req-d2115e77-3285-4c97-9b02-53d6cf99d318 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received event network-changed-b03048b5-3014-4343-9639-e364514f44d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.638 238887 DEBUG nova.compute.manager [req-aae36be9-fb22-4a75-bd11-c08f9b536316 req-d2115e77-3285-4c97-9b02-53d6cf99d318 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Refreshing instance network info cache due to event network-changed-b03048b5-3014-4343-9639-e364514f44d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.639 238887 DEBUG oslo_concurrency.lockutils [req-aae36be9-fb22-4a75-bd11-c08f9b536316 req-d2115e77-3285-4c97-9b02-53d6cf99d318 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 06:59:48 np0005604943 nova_compute[238883]: 2026-02-02 11:59:48.704 238887 DEBUG nova.network.neutron [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.564 238887 DEBUG nova.network.neutron [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Updating instance_info_cache with network_info: [{"id": "b03048b5-3014-4343-9639-e364514f44d0", "address": "fa:16:3e:d1:cf:fd", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03048b5-30", "ovs_interfaceid": "b03048b5-3014-4343-9639-e364514f44d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.587 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Releasing lock "refresh_cache-1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.587 238887 DEBUG nova.compute.manager [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Instance network_info: |[{"id": "b03048b5-3014-4343-9639-e364514f44d0", "address": "fa:16:3e:d1:cf:fd", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03048b5-30", "ovs_interfaceid": "b03048b5-3014-4343-9639-e364514f44d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.588 238887 DEBUG oslo_concurrency.lockutils [req-aae36be9-fb22-4a75-bd11-c08f9b536316 req-d2115e77-3285-4c97-9b02-53d6cf99d318 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.588 238887 DEBUG nova.network.neutron [req-aae36be9-fb22-4a75-bd11-c08f9b536316 req-d2115e77-3285-4c97-9b02-53d6cf99d318 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Refreshing network info cache for port b03048b5-3014-4343-9639-e364514f44d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.591 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Start _get_guest_xml network_info=[{"id": "b03048b5-3014-4343-9639-e364514f44d0", "address": "fa:16:3e:d1:cf:fd", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03048b5-30", "ovs_interfaceid": "b03048b5-3014-4343-9639-e364514f44d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.598 238887 WARNING nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.606 238887 DEBUG nova.virt.libvirt.host [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.607 238887 DEBUG nova.virt.libvirt.host [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.611 238887 DEBUG nova.virt.libvirt.host [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.611 238887 DEBUG nova.virt.libvirt.host [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.612 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.612 238887 DEBUG nova.virt.hardware [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.612 238887 DEBUG nova.virt.hardware [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.612 238887 DEBUG nova.virt.hardware [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.613 238887 DEBUG nova.virt.hardware [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.613 238887 DEBUG nova.virt.hardware [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.613 238887 DEBUG nova.virt.hardware [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.613 238887 DEBUG nova.virt.hardware [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.613 238887 DEBUG nova.virt.hardware [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.613 238887 DEBUG nova.virt.hardware [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.614 238887 DEBUG nova.virt.hardware [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.614 238887 DEBUG nova.virt.hardware [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.616 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.627 238887 DEBUG nova.network.neutron [-] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.643 238887 INFO nova.compute.manager [-] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Took 1.01 seconds to deallocate network for instance.#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.695 238887 DEBUG oslo_concurrency.lockutils [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.696 238887 DEBUG oslo_concurrency.lockutils [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:49 np0005604943 nova_compute[238883]: 2026-02-02 11:59:49.790 238887 DEBUG oslo_concurrency.processutils [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:59:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2574287143' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:59:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:59:50 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2072544677' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.162 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.182 238887 DEBUG nova.storage.rbd_utils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] rbd image 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.186 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 222 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Feb  2 06:59:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 06:59:50 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3174245858' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.315 238887 DEBUG oslo_concurrency.processutils [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.320 238887 DEBUG nova.compute.provider_tree [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.336 238887 DEBUG nova.scheduler.client.report [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.361 238887 DEBUG oslo_concurrency.lockutils [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.387 238887 INFO nova.scheduler.client.report [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Deleted allocations for instance b6e0af38-f069-4516-848d-2b7093956fa0#033[00m
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.461 238887 DEBUG oslo_concurrency.lockutils [None req-b88b3533-3604-4287-80eb-a947563d0859 0d0b5cfd8d84432894bd264065bcb0ba cdcfa3aaa83541878311def7781b5b82 - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.463 238887 DEBUG nova.network.neutron [req-aae36be9-fb22-4a75-bd11-c08f9b536316 req-d2115e77-3285-4c97-9b02-53d6cf99d318 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Updated VIF entry in instance network info cache for port b03048b5-3014-4343-9639-e364514f44d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.463 238887 DEBUG nova.network.neutron [req-aae36be9-fb22-4a75-bd11-c08f9b536316 req-d2115e77-3285-4c97-9b02-53d6cf99d318 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Updating instance_info_cache with network_info: [{"id": "b03048b5-3014-4343-9639-e364514f44d0", "address": "fa:16:3e:d1:cf:fd", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03048b5-30", "ovs_interfaceid": "b03048b5-3014-4343-9639-e364514f44d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.484 238887 DEBUG oslo_concurrency.lockutils [req-aae36be9-fb22-4a75-bd11-c08f9b536316 req-d2115e77-3285-4c97-9b02-53d6cf99d318 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.670 238887 DEBUG nova.compute.manager [req-5314e50a-cc56-4d83-8728-cbefeff60b64 req-b126e3ea-4309-4324-9b69-decd8d538860 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Received event network-vif-plugged-86db1a97-63b9-4069-a69f-bc0ef1f8342f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.670 238887 DEBUG oslo_concurrency.lockutils [req-5314e50a-cc56-4d83-8728-cbefeff60b64 req-b126e3ea-4309-4324-9b69-decd8d538860 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.671 238887 DEBUG oslo_concurrency.lockutils [req-5314e50a-cc56-4d83-8728-cbefeff60b64 req-b126e3ea-4309-4324-9b69-decd8d538860 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.671 238887 DEBUG oslo_concurrency.lockutils [req-5314e50a-cc56-4d83-8728-cbefeff60b64 req-b126e3ea-4309-4324-9b69-decd8d538860 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b6e0af38-f069-4516-848d-2b7093956fa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.671 238887 DEBUG nova.compute.manager [req-5314e50a-cc56-4d83-8728-cbefeff60b64 req-b126e3ea-4309-4324-9b69-decd8d538860 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] No waiting events found dispatching network-vif-plugged-86db1a97-63b9-4069-a69f-bc0ef1f8342f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.671 238887 WARNING nova.compute.manager [req-5314e50a-cc56-4d83-8728-cbefeff60b64 req-b126e3ea-4309-4324-9b69-decd8d538860 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Received unexpected event network-vif-plugged-86db1a97-63b9-4069-a69f-bc0ef1f8342f for instance with vm_state deleted and task_state None.
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.672 238887 DEBUG nova.compute.manager [req-5314e50a-cc56-4d83-8728-cbefeff60b64 req-b126e3ea-4309-4324-9b69-decd8d538860 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Received event network-vif-deleted-86db1a97-63b9-4069-a69f-bc0ef1f8342f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:59:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 06:59:50 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984473257' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.707 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.708 238887 DEBUG nova.virt.libvirt.vif [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:59:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-620872658',display_name='tempest-TestStampPattern-server-620872658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-620872658',id=11,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHkgfVtqBda0LVlF5slmF25Lo/XwS8Q8Sghn9kMaubVvv9bxRUWvKYk1te57NsoxW3EiHAVoG8/mfQ9ewKRmH/t5lWTLgWAau4XX+kOaKUVaGSh/OmZZNeyoLD4n3OeH0A==',key_name='tempest-TestStampPattern-80198922',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='82fc9ca354da4dd4bdccf919f13d3561',ramdisk_id='',reservation_id='r-jmm0e7q7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-577361379',owner_user_name='tempest-TestStampPattern-577361379-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:59:46Z,user_data=None,user_id='55f5d320b54948c9a8f465d017972291',uuid=1b038c3f-57e2-4f69-a27c-2ba8d465dfc1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b03048b5-3014-4343-9639-e364514f44d0", "address": "fa:16:3e:d1:cf:fd", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03048b5-30", "ovs_interfaceid": "b03048b5-3014-4343-9639-e364514f44d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.709 238887 DEBUG nova.network.os_vif_util [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Converting VIF {"id": "b03048b5-3014-4343-9639-e364514f44d0", "address": "fa:16:3e:d1:cf:fd", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03048b5-30", "ovs_interfaceid": "b03048b5-3014-4343-9639-e364514f44d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.709 238887 DEBUG nova.network.os_vif_util [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:cf:fd,bridge_name='br-int',has_traffic_filtering=True,id=b03048b5-3014-4343-9639-e364514f44d0,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03048b5-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.711 238887 DEBUG nova.objects.instance [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.725 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] End _get_guest_xml xml=<domain type="kvm">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  <uuid>1b038c3f-57e2-4f69-a27c-2ba8d465dfc1</uuid>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  <name>instance-0000000b</name>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestStampPattern-server-620872658</nova:name>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 11:59:49</nova:creationTime>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:        <nova:user uuid="55f5d320b54948c9a8f465d017972291">tempest-TestStampPattern-577361379-project-member</nova:user>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:        <nova:project uuid="82fc9ca354da4dd4bdccf919f13d3561">tempest-TestStampPattern-577361379</nova:project>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:        <nova:port uuid="b03048b5-3014-4343-9639-e364514f44d0">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <system>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <entry name="serial">1b038c3f-57e2-4f69-a27c-2ba8d465dfc1</entry>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <entry name="uuid">1b038c3f-57e2-4f69-a27c-2ba8d465dfc1</entry>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    </system>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  <os>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  </os>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  <features>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  </features>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  </clock>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  <devices>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk.config">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      </source>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      </auth>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    </disk>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:d1:cf:fd"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <target dev="tapb03048b5-30"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    </interface>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/1b038c3f-57e2-4f69-a27c-2ba8d465dfc1/console.log" append="off"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    </serial>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <video>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    </video>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    </rng>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 06:59:50 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 06:59:50 np0005604943 nova_compute[238883]:  </devices>
Feb  2 06:59:50 np0005604943 nova_compute[238883]: </domain>
Feb  2 06:59:50 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.726 238887 DEBUG nova.compute.manager [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Preparing to wait for external event network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.726 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.726 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.726 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.727 238887 DEBUG nova.virt.libvirt.vif [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T11:59:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-620872658',display_name='tempest-TestStampPattern-server-620872658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-620872658',id=11,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHkgfVtqBda0LVlF5slmF25Lo/XwS8Q8Sghn9kMaubVvv9bxRUWvKYk1te57NsoxW3EiHAVoG8/mfQ9ewKRmH/t5lWTLgWAau4XX+kOaKUVaGSh/OmZZNeyoLD4n3OeH0A==',key_name='tempest-TestStampPattern-80198922',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='82fc9ca354da4dd4bdccf919f13d3561',ramdisk_id='',reservation_id='r-jmm0e7q7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-577361379',owner_user_name='tempest-TestStampPattern-577361379-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T11:59:46Z,user_data=None,user_id='55f5d320b54948c9a8f465d017972291',uuid=1b038c3f-57e2-4f69-a27c-2ba8d465dfc1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b03048b5-3014-4343-9639-e364514f44d0", "address": "fa:16:3e:d1:cf:fd", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03048b5-30", "ovs_interfaceid": "b03048b5-3014-4343-9639-e364514f44d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.727 238887 DEBUG nova.network.os_vif_util [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Converting VIF {"id": "b03048b5-3014-4343-9639-e364514f44d0", "address": "fa:16:3e:d1:cf:fd", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03048b5-30", "ovs_interfaceid": "b03048b5-3014-4343-9639-e364514f44d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.728 238887 DEBUG nova.network.os_vif_util [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:cf:fd,bridge_name='br-int',has_traffic_filtering=True,id=b03048b5-3014-4343-9639-e364514f44d0,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03048b5-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.728 238887 DEBUG os_vif [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:cf:fd,bridge_name='br-int',has_traffic_filtering=True,id=b03048b5-3014-4343-9639-e364514f44d0,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03048b5-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.729 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.729 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.730 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.732 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.732 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb03048b5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.733 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb03048b5-30, col_values=(('external_ids', {'iface-id': 'b03048b5-3014-4343-9639-e364514f44d0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:cf:fd', 'vm-uuid': '1b038c3f-57e2-4f69-a27c-2ba8d465dfc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.734 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:59:50 np0005604943 NetworkManager[49093]: <info>  [1770033590.7358] manager: (tapb03048b5-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.737 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.740 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.742 238887 INFO os_vif [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:cf:fd,bridge_name='br-int',has_traffic_filtering=True,id=b03048b5-3014-4343-9639-e364514f44d0,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03048b5-30')
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.798 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.798 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.798 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No VIF found with MAC fa:16:3e:d1:cf:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.799 238887 INFO nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Using config drive
Feb  2 06:59:50 np0005604943 nova_compute[238883]: 2026-02-02 11:59:50.821 238887 DEBUG nova.storage.rbd_utils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] rbd image 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 06:59:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Feb  2 06:59:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Feb  2 06:59:50 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.125 238887 INFO nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Creating config drive at /var/lib/nova/instances/1b038c3f-57e2-4f69-a27c-2ba8d465dfc1/disk.config#033[00m
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.129 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1b038c3f-57e2-4f69-a27c-2ba8d465dfc1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsxk234fp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.185 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.246 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1b038c3f-57e2-4f69-a27c-2ba8d465dfc1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsxk234fp" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.268 238887 DEBUG nova.storage.rbd_utils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] rbd image 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.271 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1b038c3f-57e2-4f69-a27c-2ba8d465dfc1/disk.config 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.373 238887 DEBUG oslo_concurrency.processutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1b038c3f-57e2-4f69-a27c-2ba8d465dfc1/disk.config 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.374 238887 INFO nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Deleting local config drive /var/lib/nova/instances/1b038c3f-57e2-4f69-a27c-2ba8d465dfc1/disk.config because it was imported into RBD.#033[00m
Feb  2 06:59:51 np0005604943 kernel: tapb03048b5-30: entered promiscuous mode
Feb  2 06:59:51 np0005604943 NetworkManager[49093]: <info>  [1770033591.4077] manager: (tapb03048b5-30): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Feb  2 06:59:51 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:51Z|00104|binding|INFO|Claiming lport b03048b5-3014-4343-9639-e364514f44d0 for this chassis.
Feb  2 06:59:51 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:51Z|00105|binding|INFO|b03048b5-3014-4343-9639-e364514f44d0: Claiming fa:16:3e:d1:cf:fd 10.100.0.13
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.409 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.416 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:cf:fd 10.100.0.13'], port_security=['fa:16:3e:d1:cf:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1b038c3f-57e2-4f69-a27c-2ba8d465dfc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de22212f-33f4-472b-8b67-05be2c5418f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc9ca354da4dd4bdccf919f13d3561', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c1acc81f-70be-41e7-925b-e46224557e82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d9a408e-4ec8-415c-980d-60f0a24de8bc, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=b03048b5-3014-4343-9639-e364514f44d0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.417 155011 INFO neutron.agent.ovn.metadata.agent [-] Port b03048b5-3014-4343-9639-e364514f44d0 in datapath de22212f-33f4-472b-8b67-05be2c5418f5 bound to our chassis#033[00m
Feb  2 06:59:51 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:51Z|00106|binding|INFO|Setting lport b03048b5-3014-4343-9639-e364514f44d0 ovn-installed in OVS
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.418 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de22212f-33f4-472b-8b67-05be2c5418f5#033[00m
Feb  2 06:59:51 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:51Z|00107|binding|INFO|Setting lport b03048b5-3014-4343-9639-e364514f44d0 up in Southbound
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.420 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.426 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ceacaac2-db89-4ab7-b760-00db02371ff1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.427 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapde22212f-31 in ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.430 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapde22212f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.430 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fe90945f-8c1f-4ea7-8ddc-ee508da3cf5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.431 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d9562f26-fc58-4a15-a6b7-2347f4aa0f02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 systemd-machined[206973]: New machine qemu-11-instance-0000000b.
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.439 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[bd14bf25-7820-44ee-94f0-536e9aaf532a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Feb  2 06:59:51 np0005604943 systemd-udevd[254072]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.459 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1359d957-af16-4cec-86e0-611cf8723795]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 NetworkManager[49093]: <info>  [1770033591.4663] device (tapb03048b5-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 06:59:51 np0005604943 NetworkManager[49093]: <info>  [1770033591.4671] device (tapb03048b5-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.476 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[140a68f9-681e-4e64-a065-c11c127037b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.480 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[26fc3ff8-4f49-4f33-b07b-2fada3fa0277]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 systemd-udevd[254075]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 06:59:51 np0005604943 NetworkManager[49093]: <info>  [1770033591.4814] manager: (tapde22212f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/65)
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.502 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[1b758d08-ca4a-421a-af30-e5d0088b127d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.505 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[385ff679-6776-44d6-9639-999845fa8e11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 NetworkManager[49093]: <info>  [1770033591.5243] device (tapde22212f-30): carrier: link connected
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.526 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a87e11-5bbc-4ef8-b59a-9e75d2556f40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.542 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d93f9560-24af-47fc-85bb-0e7a8808f2d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde22212f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:22:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403687, 'reachable_time': 42411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254102, 'error': None, 'target': 'ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.555 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[af091807-7ece-497d-8b1a-15b5cbad937a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:22ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403687, 'tstamp': 403687}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254103, 'error': None, 'target': 'ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.571 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1e3075-72e5-46e2-ab4e-5f52e060f19f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde22212f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:22:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403687, 'reachable_time': 42411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254104, 'error': None, 'target': 'ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.596 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ceda5d88-e087-42ce-a5a7-3ce2dcc14ede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.641 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9268ff-cbf5-4b79-b568-9d06c7502dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.643 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde22212f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.643 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.644 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde22212f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:51 np0005604943 NetworkManager[49093]: <info>  [1770033591.6465] manager: (tapde22212f-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.646 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:51 np0005604943 kernel: tapde22212f-30: entered promiscuous mode
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.648 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde22212f-30, col_values=(('external_ids', {'iface-id': '6001fd23-eaf7-4f4e-bf94-96506f1de9d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:51 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:51Z|00108|binding|INFO|Releasing lport 6001fd23-eaf7-4f4e-bf94-96506f1de9d4 from this chassis (sb_readonly=0)
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.656 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.656 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/de22212f-33f4-472b-8b67-05be2c5418f5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/de22212f-33f4-472b-8b67-05be2c5418f5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.658 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[de73c553-4c39-4aee-9f2f-58b72aa4f511]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.659 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-de22212f-33f4-472b-8b67-05be2c5418f5
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/de22212f-33f4-472b-8b67-05be2c5418f5.pid.haproxy
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID de22212f-33f4-472b-8b67-05be2c5418f5
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 06:59:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:51.659 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5', 'env', 'PROCESS_TAG=haproxy-de22212f-33f4-472b-8b67-05be2c5418f5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/de22212f-33f4-472b-8b67-05be2c5418f5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.748 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033591.7481673, 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.749 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] VM Started (Lifecycle Event)
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.769 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.773 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033591.7483723, 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.773 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] VM Paused (Lifecycle Event)
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.794 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.796 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.808 238887 DEBUG nova.compute.manager [req-7948671d-095f-4c7b-914f-bb46b16934f6 req-9c08ab71-451b-4dc3-960b-74f4e468295d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received event network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.808 238887 DEBUG oslo_concurrency.lockutils [req-7948671d-095f-4c7b-914f-bb46b16934f6 req-9c08ab71-451b-4dc3-960b-74f4e468295d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.809 238887 DEBUG oslo_concurrency.lockutils [req-7948671d-095f-4c7b-914f-bb46b16934f6 req-9c08ab71-451b-4dc3-960b-74f4e468295d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.809 238887 DEBUG oslo_concurrency.lockutils [req-7948671d-095f-4c7b-914f-bb46b16934f6 req-9c08ab71-451b-4dc3-960b-74f4e468295d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.809 238887 DEBUG nova.compute.manager [req-7948671d-095f-4c7b-914f-bb46b16934f6 req-9c08ab71-451b-4dc3-960b-74f4e468295d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Processing event network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.809 238887 DEBUG nova.compute.manager [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.812 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.814 238887 INFO nova.virt.libvirt.driver [-] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Instance spawned successfully.
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.814 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.819 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.819 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033591.811933, 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.819 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] VM Resumed (Lifecycle Event)
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.829 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.830 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.830 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.831 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.831 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.831 238887 DEBUG nova.virt.libvirt.driver [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.835 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 06:59:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.839 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 06:59:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Feb  2 06:59:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.855 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.880 238887 INFO nova.compute.manager [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Took 4.88 seconds to spawn the instance on the hypervisor.
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.881 238887 DEBUG nova.compute.manager [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.937 238887 INFO nova.compute.manager [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Took 5.84 seconds to build instance.
Feb  2 06:59:51 np0005604943 nova_compute[238883]: 2026-02-02 11:59:51.953 238887 DEBUG oslo_concurrency.lockutils [None req-dac65604-e300-4023-9b39-5e650a4fc4a9 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:59:51 np0005604943 podman[254177]: 2026-02-02 11:59:51.982698432 +0000 UTC m=+0.053080992 container create 4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:59:52 np0005604943 systemd[1]: Started libpod-conmon-4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f.scope.
Feb  2 06:59:52 np0005604943 systemd[1]: Started libcrun container.
Feb  2 06:59:52 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ab7892671dce0e4b402c8286cd513e25d16ba370fd07f414a70702ed7c26ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 06:59:52 np0005604943 podman[254177]: 2026-02-02 11:59:51.951831914 +0000 UTC m=+0.022214494 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 06:59:52 np0005604943 podman[254177]: 2026-02-02 11:59:52.052740433 +0000 UTC m=+0.123123033 container init 4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 06:59:52 np0005604943 podman[254177]: 2026-02-02 11:59:52.057296337 +0000 UTC m=+0.127678897 container start 4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 06:59:52 np0005604943 neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5[254193]: [NOTICE]   (254197) : New worker (254199) forked
Feb  2 06:59:52 np0005604943 neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5[254193]: [NOTICE]   (254197) : Loading success.
Feb  2 06:59:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 222 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Feb  2 06:59:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 06:59:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2568471054' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 06:59:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 06:59:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2568471054' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 06:59:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:59:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Feb  2 06:59:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Feb  2 06:59:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Feb  2 06:59:53 np0005604943 nova_compute[238883]: 2026-02-02 11:59:53.913 238887 DEBUG nova.compute.manager [req-22016e49-8c4b-4af2-bc5e-63765bb91b65 req-ef060ae3-4734-4d13-a622-c7f129b71181 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received event network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:59:53 np0005604943 nova_compute[238883]: 2026-02-02 11:59:53.914 238887 DEBUG oslo_concurrency.lockutils [req-22016e49-8c4b-4af2-bc5e-63765bb91b65 req-ef060ae3-4734-4d13-a622-c7f129b71181 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:59:53 np0005604943 nova_compute[238883]: 2026-02-02 11:59:53.914 238887 DEBUG oslo_concurrency.lockutils [req-22016e49-8c4b-4af2-bc5e-63765bb91b65 req-ef060ae3-4734-4d13-a622-c7f129b71181 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:59:53 np0005604943 nova_compute[238883]: 2026-02-02 11:59:53.914 238887 DEBUG oslo_concurrency.lockutils [req-22016e49-8c4b-4af2-bc5e-63765bb91b65 req-ef060ae3-4734-4d13-a622-c7f129b71181 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:59:53 np0005604943 nova_compute[238883]: 2026-02-02 11:59:53.914 238887 DEBUG nova.compute.manager [req-22016e49-8c4b-4af2-bc5e-63765bb91b65 req-ef060ae3-4734-4d13-a622-c7f129b71181 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] No waiting events found dispatching network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 06:59:53 np0005604943 nova_compute[238883]: 2026-02-02 11:59:53.915 238887 WARNING nova.compute.manager [req-22016e49-8c4b-4af2-bc5e-63765bb91b65 req-ef060ae3-4734-4d13-a622-c7f129b71181 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received unexpected event network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 for instance with vm_state active and task_state None.
Feb  2 06:59:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 289 op/s
Feb  2 06:59:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Feb  2 06:59:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Feb  2 06:59:54 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Feb  2 06:59:55 np0005604943 nova_compute[238883]: 2026-02-02 11:59:55.737 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.007 238887 DEBUG nova.compute.manager [req-164304d7-d876-43cf-8ff4-0bb9466dd82b req-2959bf60-d372-453c-8046-54225dee9983 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received event network-changed-b03048b5-3014-4343-9639-e364514f44d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.007 238887 DEBUG nova.compute.manager [req-164304d7-d876-43cf-8ff4-0bb9466dd82b req-2959bf60-d372-453c-8046-54225dee9983 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Refreshing instance network info cache due to event network-changed-b03048b5-3014-4343-9639-e364514f44d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.008 238887 DEBUG oslo_concurrency.lockutils [req-164304d7-d876-43cf-8ff4-0bb9466dd82b req-2959bf60-d372-453c-8046-54225dee9983 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.008 238887 DEBUG oslo_concurrency.lockutils [req-164304d7-d876-43cf-8ff4-0bb9466dd82b req-2959bf60-d372-453c-8046-54225dee9983 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.008 238887 DEBUG nova.network.neutron [req-164304d7-d876-43cf-8ff4-0bb9466dd82b req-2959bf60-d372-453c-8046-54225dee9983 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Refreshing network info cache for port b03048b5-3014-4343-9639-e364514f44d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.186 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:59:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 36 KiB/s wr, 279 op/s
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.318 238887 DEBUG oslo_concurrency.lockutils [None req-1ccb62f6-7cfd-49bd-ba5b-48f3f3282569 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "484e5b46-6672-4796-8f30-6d3e862428d3" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.319 238887 DEBUG oslo_concurrency.lockutils [None req-1ccb62f6-7cfd-49bd-ba5b-48f3f3282569 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.331 238887 INFO nova.compute.manager [None req-1ccb62f6-7cfd-49bd-ba5b-48f3f3282569 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Detaching volume 63f6256e-2171-493c-8888-ea8c800ad577
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.459 238887 INFO nova.virt.block_device [None req-1ccb62f6-7cfd-49bd-ba5b-48f3f3282569 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Attempting to driver detach volume 63f6256e-2171-493c-8888-ea8c800ad577 from mountpoint /dev/vdb
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.469 238887 DEBUG nova.virt.libvirt.driver [None req-1ccb62f6-7cfd-49bd-ba5b-48f3f3282569 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Attempting to detach device vdb from instance 484e5b46-6672-4796-8f30-6d3e862428d3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.470 238887 DEBUG nova.virt.libvirt.guest [None req-1ccb62f6-7cfd-49bd-ba5b-48f3f3282569 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 06:59:56 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:59:56 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-63f6256e-2171-493c-8888-ea8c800ad577">
Feb  2 06:59:56 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:59:56 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:59:56 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:59:56 np0005604943 nova_compute[238883]:  <serial>63f6256e-2171-493c-8888-ea8c800ad577</serial>
Feb  2 06:59:56 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 06:59:56 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:59:56 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.478 238887 INFO nova.virt.libvirt.driver [None req-1ccb62f6-7cfd-49bd-ba5b-48f3f3282569 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Successfully detached device vdb from instance 484e5b46-6672-4796-8f30-6d3e862428d3 from the persistent domain config.
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.478 238887 DEBUG nova.virt.libvirt.driver [None req-1ccb62f6-7cfd-49bd-ba5b-48f3f3282569 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 484e5b46-6672-4796-8f30-6d3e862428d3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.479 238887 DEBUG nova.virt.libvirt.guest [None req-1ccb62f6-7cfd-49bd-ba5b-48f3f3282569 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 06:59:56 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 06:59:56 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-63f6256e-2171-493c-8888-ea8c800ad577">
Feb  2 06:59:56 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 06:59:56 np0005604943 nova_compute[238883]:  </source>
Feb  2 06:59:56 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 06:59:56 np0005604943 nova_compute[238883]:  <serial>63f6256e-2171-493c-8888-ea8c800ad577</serial>
Feb  2 06:59:56 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 06:59:56 np0005604943 nova_compute[238883]: </disk>
Feb  2 06:59:56 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.587 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Received event <DeviceRemovedEvent: 1770033596.5875, 484e5b46-6672-4796-8f30-6d3e862428d3 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.591 238887 DEBUG nova.virt.libvirt.driver [None req-1ccb62f6-7cfd-49bd-ba5b-48f3f3282569 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 484e5b46-6672-4796-8f30-6d3e862428d3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.593 238887 INFO nova.virt.libvirt.driver [None req-1ccb62f6-7cfd-49bd-ba5b-48f3f3282569 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Successfully detached device vdb from instance 484e5b46-6672-4796-8f30-6d3e862428d3 from the live domain config.
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.749 238887 DEBUG nova.objects.instance [None req-1ccb62f6-7cfd-49bd-ba5b-48f3f3282569 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lazy-loading 'flavor' on Instance uuid 484e5b46-6672-4796-8f30-6d3e862428d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 06:59:56 np0005604943 nova_compute[238883]: 2026-02-02 11:59:56.779 238887 DEBUG oslo_concurrency.lockutils [None req-1ccb62f6-7cfd-49bd-ba5b-48f3f3282569 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 06:59:57 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:57Z|00109|binding|INFO|Releasing lport 6001fd23-eaf7-4f4e-bf94-96506f1de9d4 from this chassis (sb_readonly=0)
Feb  2 06:59:57 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:57Z|00110|binding|INFO|Releasing lport 7f7a24e7-2e36-4c1c-8857-8367e857534f from this chassis (sb_readonly=0)
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.073 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.417 238887 DEBUG nova.network.neutron [req-164304d7-d876-43cf-8ff4-0bb9466dd82b req-2959bf60-d372-453c-8046-54225dee9983 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Updated VIF entry in instance network info cache for port b03048b5-3014-4343-9639-e364514f44d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.419 238887 DEBUG nova.network.neutron [req-164304d7-d876-43cf-8ff4-0bb9466dd82b req-2959bf60-d372-453c-8046-54225dee9983 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Updating instance_info_cache with network_info: [{"id": "b03048b5-3014-4343-9639-e364514f44d0", "address": "fa:16:3e:d1:cf:fd", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03048b5-30", "ovs_interfaceid": "b03048b5-3014-4343-9639-e364514f44d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.438 238887 DEBUG oslo_concurrency.lockutils [req-164304d7-d876-43cf-8ff4-0bb9466dd82b req-2959bf60-d372-453c-8046-54225dee9983 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 06:59:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 06:59:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Feb  2 06:59:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Feb  2 06:59:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.573 238887 DEBUG oslo_concurrency.lockutils [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "484e5b46-6672-4796-8f30-6d3e862428d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.574 238887 DEBUG oslo_concurrency.lockutils [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.574 238887 DEBUG oslo_concurrency.lockutils [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.574 238887 DEBUG oslo_concurrency.lockutils [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.575 238887 DEBUG oslo_concurrency.lockutils [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.575 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.576 238887 INFO nova.compute.manager [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Terminating instance#033[00m
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.576 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.578 238887 DEBUG nova.compute.manager [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.578 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:57 np0005604943 kernel: tap5ea4af23-2f (unregistering): left promiscuous mode
Feb  2 06:59:57 np0005604943 NetworkManager[49093]: <info>  [1770033597.6300] device (tap5ea4af23-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.635 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:57 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:57Z|00111|binding|INFO|Releasing lport 5ea4af23-2f74-4e93-8aa4-42a49865dbf4 from this chassis (sb_readonly=0)
Feb  2 06:59:57 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:57Z|00112|binding|INFO|Setting lport 5ea4af23-2f74-4e93-8aa4-42a49865dbf4 down in Southbound
Feb  2 06:59:57 np0005604943 ovn_controller[145056]: 2026-02-02T11:59:57Z|00113|binding|INFO|Removing iface tap5ea4af23-2f ovn-installed in OVS
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.647 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:d4:07 10.100.0.7'], port_security=['fa:16:3e:92:d4:07 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '484e5b46-6672-4796-8f30-6d3e862428d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-302d1601-7819-4001-9e16-ee97183eb73b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61afd70cadc143c2a9c65f6cec8dc9e8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f4789968-3c4c-4b11-a2c2-fa2dafeb7088', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb72e047-676c-4da5-9d5d-6a9b44c0057a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=5ea4af23-2f74-4e93-8aa4-42a49865dbf4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.648 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 5ea4af23-2f74-4e93-8aa4-42a49865dbf4 in datapath 302d1601-7819-4001-9e16-ee97183eb73b unbound from our chassis#033[00m
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.649 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 302d1601-7819-4001-9e16-ee97183eb73b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.650 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[cea5fc89-05e7-4675-9c30-c005d0bf9f21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.651 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b namespace which is not needed anymore#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.658 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:57 np0005604943 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Feb  2 06:59:57 np0005604943 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 13.148s CPU time.
Feb  2 06:59:57 np0005604943 systemd-machined[206973]: Machine qemu-9-instance-00000009 terminated.
Feb  2 06:59:57 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[253187]: [NOTICE]   (253191) : haproxy version is 2.8.14-c23fe91
Feb  2 06:59:57 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[253187]: [NOTICE]   (253191) : path to executable is /usr/sbin/haproxy
Feb  2 06:59:57 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[253187]: [WARNING]  (253191) : Exiting Master process...
Feb  2 06:59:57 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[253187]: [WARNING]  (253191) : Exiting Master process...
Feb  2 06:59:57 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[253187]: [ALERT]    (253191) : Current worker (253193) exited with code 143 (Terminated)
Feb  2 06:59:57 np0005604943 neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b[253187]: [WARNING]  (253191) : All workers exited. Exiting... (0)
Feb  2 06:59:57 np0005604943 systemd[1]: libpod-0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41.scope: Deactivated successfully.
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.796 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:57 np0005604943 podman[254230]: 2026-02-02 11:59:57.796864534 +0000 UTC m=+0.075128379 container died 0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.802 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.817 238887 INFO nova.virt.libvirt.driver [-] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Instance destroyed successfully.#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.818 238887 DEBUG nova.objects.instance [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lazy-loading 'resources' on Instance uuid 484e5b46-6672-4796-8f30-6d3e862428d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.831 238887 DEBUG nova.virt.libvirt.vif [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-949220200',display_name='tempest-VolumesBackupsTest-instance-949220200',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-949220200',id=9,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNxTZ14bn8vPrO9oxOInbihHMGj10qQvaC4jOXS0xXmtEhvsgYoxZPSH8wDBRxFEFRVuxr8jHsawf9NRli3KMqWhhXStjp7DSe1XQieULnJNXv/iowk8UImq0y2/s8Et5g==',key_name='tempest-keypair-225548884',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:59:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='61afd70cadc143c2a9c65f6cec8dc9e8',ramdisk_id='',reservation_id='r-r0o2x8zy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1949354358',owner_user_name='tempest-VolumesBackupsTest-1949354358-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T11:59:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='619ce2f20dd849f6a462d2162bcccc7a',uuid=484e5b46-6672-4796-8f30-6d3e862428d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "address": "fa:16:3e:92:d4:07", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea4af23-2f", "ovs_interfaceid": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.832 238887 DEBUG nova.network.os_vif_util [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Converting VIF {"id": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "address": "fa:16:3e:92:d4:07", "network": {"id": "302d1601-7819-4001-9e16-ee97183eb73b", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-255519272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61afd70cadc143c2a9c65f6cec8dc9e8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea4af23-2f", "ovs_interfaceid": "5ea4af23-2f74-4e93-8aa4-42a49865dbf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.833 238887 DEBUG nova.network.os_vif_util [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:92:d4:07,bridge_name='br-int',has_traffic_filtering=True,id=5ea4af23-2f74-4e93-8aa4-42a49865dbf4,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea4af23-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.833 238887 DEBUG os_vif [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:92:d4:07,bridge_name='br-int',has_traffic_filtering=True,id=5ea4af23-2f74-4e93-8aa4-42a49865dbf4,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea4af23-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.834 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.834 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ea4af23-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.836 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.837 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.839 238887 INFO os_vif [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:92:d4:07,bridge_name='br-int',has_traffic_filtering=True,id=5ea4af23-2f74-4e93-8aa4-42a49865dbf4,network=Network(302d1601-7819-4001-9e16-ee97183eb73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea4af23-2f')#033[00m
Feb  2 06:59:57 np0005604943 systemd[1]: var-lib-containers-storage-overlay-708cf6674ca98aecb2bd3d3b8a772c550dc0637d72d5f2b63c8c278e063f02ff-merged.mount: Deactivated successfully.
Feb  2 06:59:57 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41-userdata-shm.mount: Deactivated successfully.
Feb  2 06:59:57 np0005604943 podman[254230]: 2026-02-02 11:59:57.863985206 +0000 UTC m=+0.142249021 container cleanup 0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb  2 06:59:57 np0005604943 systemd[1]: libpod-conmon-0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41.scope: Deactivated successfully.
Feb  2 06:59:57 np0005604943 podman[254282]: 2026-02-02 11:59:57.911145137 +0000 UTC m=+0.033317006 container remove 0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.915 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[db9052d7-1bf9-4e8b-a55b-dd586846c593]: (4, ('Mon Feb  2 11:59:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b (0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41)\n0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41\nMon Feb  2 11:59:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b (0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41)\n0ad10784a1e0608887a8fda3e883bec873b3305bef297292afe54601fa9ace41\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.917 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[54360da0-a89e-46d2-a511-cd6e3193d4ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.918 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap302d1601-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.920 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:57 np0005604943 kernel: tap302d1601-70: left promiscuous mode
Feb  2 06:59:57 np0005604943 nova_compute[238883]: 2026-02-02 11:59:57.928 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.931 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7541a8ca-d2d0-431a-a17f-3c8f87adc975]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.945 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[0dbbd36a-6287-49e8-b46a-0b975507a151]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.947 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fb889325-fdc0-4f06-90c3-5f7f758ccaf2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.963 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[dec489ec-beb1-4a5a-adc8-aa529a719e11]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400168, 'reachable_time': 18082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254299, 'error': None, 'target': 'ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:57 np0005604943 systemd[1]: run-netns-ovnmeta\x2d302d1601\x2d7819\x2d4001\x2d9e16\x2dee97183eb73b.mount: Deactivated successfully.
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.971 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-302d1601-7819-4001-9e16-ee97183eb73b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 06:59:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 11:59:57.971 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[80b954ec-6b75-466f-926d-a78c8f282da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 06:59:58 np0005604943 nova_compute[238883]: 2026-02-02 11:59:58.114 238887 INFO nova.virt.libvirt.driver [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Deleting instance files /var/lib/nova/instances/484e5b46-6672-4796-8f30-6d3e862428d3_del#033[00m
Feb  2 06:59:58 np0005604943 nova_compute[238883]: 2026-02-02 11:59:58.115 238887 INFO nova.virt.libvirt.driver [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Deletion of /var/lib/nova/instances/484e5b46-6672-4796-8f30-6d3e862428d3_del complete#033[00m
Feb  2 06:59:58 np0005604943 nova_compute[238883]: 2026-02-02 11:59:58.160 238887 INFO nova.compute.manager [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Took 0.58 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 06:59:58 np0005604943 nova_compute[238883]: 2026-02-02 11:59:58.160 238887 DEBUG oslo.service.loopingcall [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 06:59:58 np0005604943 nova_compute[238883]: 2026-02-02 11:59:58.161 238887 DEBUG nova.compute.manager [-] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 06:59:58 np0005604943 nova_compute[238883]: 2026-02-02 11:59:58.161 238887 DEBUG nova.network.neutron [-] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 06:59:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 33 KiB/s wr, 343 op/s
Feb  2 06:59:58 np0005604943 nova_compute[238883]: 2026-02-02 11:59:58.876 238887 DEBUG nova.compute.manager [req-7dc959b5-52be-4436-be7e-8931f400bc0f req-c3ea405d-9cab-4c70-ad9f-f34c00dd834d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Received event network-vif-unplugged-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 06:59:58 np0005604943 nova_compute[238883]: 2026-02-02 11:59:58.876 238887 DEBUG oslo_concurrency.lockutils [req-7dc959b5-52be-4436-be7e-8931f400bc0f req-c3ea405d-9cab-4c70-ad9f-f34c00dd834d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 06:59:58 np0005604943 nova_compute[238883]: 2026-02-02 11:59:58.876 238887 DEBUG oslo_concurrency.lockutils [req-7dc959b5-52be-4436-be7e-8931f400bc0f req-c3ea405d-9cab-4c70-ad9f-f34c00dd834d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 06:59:58 np0005604943 nova_compute[238883]: 2026-02-02 11:59:58.877 238887 DEBUG oslo_concurrency.lockutils [req-7dc959b5-52be-4436-be7e-8931f400bc0f req-c3ea405d-9cab-4c70-ad9f-f34c00dd834d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 06:59:58 np0005604943 nova_compute[238883]: 2026-02-02 11:59:58.877 238887 DEBUG nova.compute.manager [req-7dc959b5-52be-4436-be7e-8931f400bc0f req-c3ea405d-9cab-4c70-ad9f-f34c00dd834d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] No waiting events found dispatching network-vif-unplugged-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 06:59:58 np0005604943 nova_compute[238883]: 2026-02-02 11:59:58.877 238887 DEBUG nova.compute.manager [req-7dc959b5-52be-4436-be7e-8931f400bc0f req-c3ea405d-9cab-4c70-ad9f-f34c00dd834d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Received event network-vif-unplugged-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.023 238887 DEBUG nova.network.neutron [-] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.038 238887 INFO nova.compute.manager [-] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Took 1.88 seconds to deallocate network for instance.#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.080 238887 DEBUG oslo_concurrency.lockutils [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.081 238887 DEBUG oslo_concurrency.lockutils [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.104 238887 DEBUG nova.compute.manager [req-c62d20c3-f22f-4242-a21e-b43de1e9dd06 req-a557dc47-5b52-47bc-a906-a8785c76ce3f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Received event network-vif-deleted-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.183 238887 DEBUG oslo_concurrency.processutils [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:00:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 140 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 24 KiB/s wr, 181 op/s
Feb  2 07:00:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:00:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/18781512' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.752 238887 DEBUG oslo_concurrency.processutils [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.757 238887 DEBUG nova.compute.provider_tree [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.779 238887 DEBUG nova.scheduler.client.report [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.802 238887 DEBUG oslo_concurrency.lockutils [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.823 238887 INFO nova.scheduler.client.report [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Deleted allocations for instance 484e5b46-6672-4796-8f30-6d3e862428d3#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.896 238887 DEBUG oslo_concurrency.lockutils [None req-8a891e03-05c2-4e7e-a33c-722df2ccfd08 619ce2f20dd849f6a462d2162bcccc7a 61afd70cadc143c2a9c65f6cec8dc9e8 - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.957 238887 DEBUG nova.compute.manager [req-71591487-7721-4537-b7ab-c5708b032a78 req-a5300e97-27df-4e47-95f0-f7125a805ee0 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Received event network-vif-plugged-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.959 238887 DEBUG oslo_concurrency.lockutils [req-71591487-7721-4537-b7ab-c5708b032a78 req-a5300e97-27df-4e47-95f0-f7125a805ee0 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.959 238887 DEBUG oslo_concurrency.lockutils [req-71591487-7721-4537-b7ab-c5708b032a78 req-a5300e97-27df-4e47-95f0-f7125a805ee0 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.960 238887 DEBUG oslo_concurrency.lockutils [req-71591487-7721-4537-b7ab-c5708b032a78 req-a5300e97-27df-4e47-95f0-f7125a805ee0 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "484e5b46-6672-4796-8f30-6d3e862428d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.960 238887 DEBUG nova.compute.manager [req-71591487-7721-4537-b7ab-c5708b032a78 req-a5300e97-27df-4e47-95f0-f7125a805ee0 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] No waiting events found dispatching network-vif-plugged-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:00:00 np0005604943 nova_compute[238883]: 2026-02-02 12:00:00.960 238887 WARNING nova.compute.manager [req-71591487-7721-4537-b7ab-c5708b032a78 req-a5300e97-27df-4e47-95f0-f7125a805ee0 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Received unexpected event network-vif-plugged-5ea4af23-2f74-4e93-8aa4-42a49865dbf4 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:00:01 np0005604943 nova_compute[238883]: 2026-02-02 12:00:01.188 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 140 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1023 B/s wr, 84 op/s
Feb  2 07:00:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:00:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Feb  2 07:00:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Feb  2 07:00:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Feb  2 07:00:02 np0005604943 nova_compute[238883]: 2026-02-02 12:00:02.836 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:03 np0005604943 nova_compute[238883]: 2026-02-02 12:00:03.288 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033588.287336, b6e0af38-f069-4516-848d-2b7093956fa0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:00:03 np0005604943 nova_compute[238883]: 2026-02-02 12:00:03.288 238887 INFO nova.compute.manager [-] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:00:03 np0005604943 nova_compute[238883]: 2026-02-02 12:00:03.310 238887 DEBUG nova.compute.manager [None req-060621ac-1696-4e8c-91c0-9ef67db34bc3 - - - - - -] [instance: b6e0af38-f069-4516-848d-2b7093956fa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:00:03 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:03Z|00114|binding|INFO|Releasing lport 6001fd23-eaf7-4f4e-bf94-96506f1de9d4 from this chassis (sb_readonly=0)
Feb  2 07:00:03 np0005604943 nova_compute[238883]: 2026-02-02 12:00:03.443 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:04 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:04Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:cf:fd 10.100.0.13
Feb  2 07:00:04 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:04Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:cf:fd 10.100.0.13
Feb  2 07:00:04 np0005604943 podman[254323]: 2026-02-02 12:00:04.055857124 +0000 UTC m=+0.070848603 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:00:04 np0005604943 podman[254324]: 2026-02-02 12:00:04.05897294 +0000 UTC m=+0.073954069 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb  2 07:00:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 103 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 143 op/s
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:00:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:00:05 np0005604943 podman[254511]: 2026-02-02 12:00:05.273977361 +0000 UTC m=+0.038760183 container create e4d64bf045866a2d423f873267e9f79fd2a35ec365a3c0cfa9aeeb3a85ca45a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_clarke, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:00:05 np0005604943 systemd[1]: Started libpod-conmon-e4d64bf045866a2d423f873267e9f79fd2a35ec365a3c0cfa9aeeb3a85ca45a2.scope.
Feb  2 07:00:05 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:00:05 np0005604943 podman[254511]: 2026-02-02 12:00:05.334299339 +0000 UTC m=+0.099082161 container init e4d64bf045866a2d423f873267e9f79fd2a35ec365a3c0cfa9aeeb3a85ca45a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_clarke, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 07:00:05 np0005604943 podman[254511]: 2026-02-02 12:00:05.339450388 +0000 UTC m=+0.104233180 container start e4d64bf045866a2d423f873267e9f79fd2a35ec365a3c0cfa9aeeb3a85ca45a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 07:00:05 np0005604943 nifty_clarke[254527]: 167 167
Feb  2 07:00:05 np0005604943 podman[254511]: 2026-02-02 12:00:05.344172307 +0000 UTC m=+0.108955129 container attach e4d64bf045866a2d423f873267e9f79fd2a35ec365a3c0cfa9aeeb3a85ca45a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_clarke, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:00:05 np0005604943 systemd[1]: libpod-e4d64bf045866a2d423f873267e9f79fd2a35ec365a3c0cfa9aeeb3a85ca45a2.scope: Deactivated successfully.
Feb  2 07:00:05 np0005604943 conmon[254527]: conmon e4d64bf045866a2d423f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4d64bf045866a2d423f873267e9f79fd2a35ec365a3c0cfa9aeeb3a85ca45a2.scope/container/memory.events
Feb  2 07:00:05 np0005604943 podman[254511]: 2026-02-02 12:00:05.345747709 +0000 UTC m=+0.110530521 container died e4d64bf045866a2d423f873267e9f79fd2a35ec365a3c0cfa9aeeb3a85ca45a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:00:05 np0005604943 podman[254511]: 2026-02-02 12:00:05.254194804 +0000 UTC m=+0.018977616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:00:05 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a1914b4eb1bb4511691575489c93645251e5fb0424de42ceac09690136a5a5cf-merged.mount: Deactivated successfully.
Feb  2 07:00:05 np0005604943 podman[254511]: 2026-02-02 12:00:05.3966107 +0000 UTC m=+0.161393502 container remove e4d64bf045866a2d423f873267e9f79fd2a35ec365a3c0cfa9aeeb3a85ca45a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_clarke, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:00:05 np0005604943 systemd[1]: libpod-conmon-e4d64bf045866a2d423f873267e9f79fd2a35ec365a3c0cfa9aeeb3a85ca45a2.scope: Deactivated successfully.
Feb  2 07:00:05 np0005604943 podman[254551]: 2026-02-02 12:00:05.525277022 +0000 UTC m=+0.041869297 container create edb08b93aa94e2cdb6d950734a73c2d77db88b656e841f409a2620e0c6931343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williams, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:00:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Feb  2 07:00:05 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:00:05 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:00:05 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:00:05 np0005604943 systemd[1]: Started libpod-conmon-edb08b93aa94e2cdb6d950734a73c2d77db88b656e841f409a2620e0c6931343.scope.
Feb  2 07:00:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Feb  2 07:00:05 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Feb  2 07:00:05 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:00:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212315a140f2e6f0b6aa90c3cd8abdb36cd2a9e08d29ba0f6ea6c689ef08e8fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:00:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212315a140f2e6f0b6aa90c3cd8abdb36cd2a9e08d29ba0f6ea6c689ef08e8fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:00:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212315a140f2e6f0b6aa90c3cd8abdb36cd2a9e08d29ba0f6ea6c689ef08e8fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:00:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212315a140f2e6f0b6aa90c3cd8abdb36cd2a9e08d29ba0f6ea6c689ef08e8fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:00:05 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212315a140f2e6f0b6aa90c3cd8abdb36cd2a9e08d29ba0f6ea6c689ef08e8fa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:00:05 np0005604943 podman[254551]: 2026-02-02 12:00:05.505443354 +0000 UTC m=+0.022035639 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:00:05 np0005604943 podman[254551]: 2026-02-02 12:00:05.601644836 +0000 UTC m=+0.118237121 container init edb08b93aa94e2cdb6d950734a73c2d77db88b656e841f409a2620e0c6931343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williams, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:00:05 np0005604943 podman[254551]: 2026-02-02 12:00:05.612033928 +0000 UTC m=+0.128626193 container start edb08b93aa94e2cdb6d950734a73c2d77db88b656e841f409a2620e0c6931343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williams, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:00:05 np0005604943 podman[254551]: 2026-02-02 12:00:05.615708218 +0000 UTC m=+0.132300533 container attach edb08b93aa94e2cdb6d950734a73c2d77db88b656e841f409a2620e0c6931343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williams, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:00:06 np0005604943 tender_williams[254568]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:00:06 np0005604943 tender_williams[254568]: --> All data devices are unavailable
Feb  2 07:00:06 np0005604943 systemd[1]: libpod-edb08b93aa94e2cdb6d950734a73c2d77db88b656e841f409a2620e0c6931343.scope: Deactivated successfully.
Feb  2 07:00:06 np0005604943 conmon[254568]: conmon edb08b93aa94e2cdb6d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-edb08b93aa94e2cdb6d950734a73c2d77db88b656e841f409a2620e0c6931343.scope/container/memory.events
Feb  2 07:00:06 np0005604943 podman[254551]: 2026-02-02 12:00:06.03800048 +0000 UTC m=+0.554592745 container died edb08b93aa94e2cdb6d950734a73c2d77db88b656e841f409a2620e0c6931343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 07:00:06 np0005604943 systemd[1]: var-lib-containers-storage-overlay-212315a140f2e6f0b6aa90c3cd8abdb36cd2a9e08d29ba0f6ea6c689ef08e8fa-merged.mount: Deactivated successfully.
Feb  2 07:00:06 np0005604943 podman[254551]: 2026-02-02 12:00:06.090456755 +0000 UTC m=+0.607049020 container remove edb08b93aa94e2cdb6d950734a73c2d77db88b656e841f409a2620e0c6931343 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_williams, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 07:00:06 np0005604943 systemd[1]: libpod-conmon-edb08b93aa94e2cdb6d950734a73c2d77db88b656e841f409a2620e0c6931343.scope: Deactivated successfully.
Feb  2 07:00:06 np0005604943 nova_compute[238883]: 2026-02-02 12:00:06.194 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 103 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 246 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Feb  2 07:00:06 np0005604943 podman[254663]: 2026-02-02 12:00:06.477558452 +0000 UTC m=+0.035798342 container create c9bb729a0011a579acbcaf6d8b846e60a25a7737af28a9b37546a93ba11060b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 07:00:06 np0005604943 systemd[1]: Started libpod-conmon-c9bb729a0011a579acbcaf6d8b846e60a25a7737af28a9b37546a93ba11060b8.scope.
Feb  2 07:00:06 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:00:06 np0005604943 podman[254663]: 2026-02-02 12:00:06.461902387 +0000 UTC m=+0.020142297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:00:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Feb  2 07:00:06 np0005604943 podman[254663]: 2026-02-02 12:00:06.567236806 +0000 UTC m=+0.125476716 container init c9bb729a0011a579acbcaf6d8b846e60a25a7737af28a9b37546a93ba11060b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:00:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Feb  2 07:00:06 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Feb  2 07:00:06 np0005604943 podman[254663]: 2026-02-02 12:00:06.576429506 +0000 UTC m=+0.134669396 container start c9bb729a0011a579acbcaf6d8b846e60a25a7737af28a9b37546a93ba11060b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:00:06 np0005604943 podman[254663]: 2026-02-02 12:00:06.580470366 +0000 UTC m=+0.138710276 container attach c9bb729a0011a579acbcaf6d8b846e60a25a7737af28a9b37546a93ba11060b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:00:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:06.579 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:00:06 np0005604943 busy_gauss[254679]: 167 167
Feb  2 07:00:06 np0005604943 systemd[1]: libpod-c9bb729a0011a579acbcaf6d8b846e60a25a7737af28a9b37546a93ba11060b8.scope: Deactivated successfully.
Feb  2 07:00:06 np0005604943 podman[254663]: 2026-02-02 12:00:06.585473031 +0000 UTC m=+0.143712931 container died c9bb729a0011a579acbcaf6d8b846e60a25a7737af28a9b37546a93ba11060b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_gauss, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:00:06 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4777ff07c9a6608ff6735bbdeafbb79961ce9581a39aac8b6f19d527fb5cc809-merged.mount: Deactivated successfully.
Feb  2 07:00:06 np0005604943 podman[254663]: 2026-02-02 12:00:06.632915669 +0000 UTC m=+0.191155599 container remove c9bb729a0011a579acbcaf6d8b846e60a25a7737af28a9b37546a93ba11060b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_gauss, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:00:06 np0005604943 systemd[1]: libpod-conmon-c9bb729a0011a579acbcaf6d8b846e60a25a7737af28a9b37546a93ba11060b8.scope: Deactivated successfully.
Feb  2 07:00:06 np0005604943 podman[254702]: 2026-02-02 12:00:06.783082525 +0000 UTC m=+0.043575113 container create d37d26041ffa0305fc6f004e2dcbb015cbb83e903ea3c0038ffa3592e29dce47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bassi, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 07:00:06 np0005604943 systemd[1]: Started libpod-conmon-d37d26041ffa0305fc6f004e2dcbb015cbb83e903ea3c0038ffa3592e29dce47.scope.
Feb  2 07:00:06 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:00:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79412c1bcbc2dbbf4c3a8a954a417bf9d1a66ed664d2df413d4d0bacfcb42627/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:00:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79412c1bcbc2dbbf4c3a8a954a417bf9d1a66ed664d2df413d4d0bacfcb42627/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:00:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79412c1bcbc2dbbf4c3a8a954a417bf9d1a66ed664d2df413d4d0bacfcb42627/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:00:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79412c1bcbc2dbbf4c3a8a954a417bf9d1a66ed664d2df413d4d0bacfcb42627/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:00:06 np0005604943 podman[254702]: 2026-02-02 12:00:06.854042622 +0000 UTC m=+0.114535230 container init d37d26041ffa0305fc6f004e2dcbb015cbb83e903ea3c0038ffa3592e29dce47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bassi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 07:00:06 np0005604943 podman[254702]: 2026-02-02 12:00:06.762328152 +0000 UTC m=+0.022820790 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:00:06 np0005604943 podman[254702]: 2026-02-02 12:00:06.85988123 +0000 UTC m=+0.120373818 container start d37d26041ffa0305fc6f004e2dcbb015cbb83e903ea3c0038ffa3592e29dce47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:00:06 np0005604943 podman[254702]: 2026-02-02 12:00:06.863133318 +0000 UTC m=+0.123625926 container attach d37d26041ffa0305fc6f004e2dcbb015cbb83e903ea3c0038ffa3592e29dce47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bassi, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:00:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:00:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3023461112' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:00:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:00:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3023461112' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]: {
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:    "0": [
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:        {
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "devices": [
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "/dev/loop3"
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            ],
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_name": "ceph_lv0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_size": "21470642176",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "name": "ceph_lv0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "tags": {
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.cluster_name": "ceph",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.crush_device_class": "",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.encrypted": "0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.objectstore": "bluestore",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.osd_id": "0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.type": "block",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.vdo": "0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.with_tpm": "0"
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            },
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "type": "block",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "vg_name": "ceph_vg0"
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:        }
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:    ],
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:    "1": [
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:        {
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "devices": [
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "/dev/loop4"
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            ],
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_name": "ceph_lv1",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_size": "21470642176",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "name": "ceph_lv1",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "tags": {
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.cluster_name": "ceph",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.crush_device_class": "",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.encrypted": "0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.objectstore": "bluestore",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.osd_id": "1",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.type": "block",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.vdo": "0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.with_tpm": "0"
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            },
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "type": "block",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "vg_name": "ceph_vg1"
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:        }
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:    ],
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:    "2": [
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:        {
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "devices": [
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "/dev/loop5"
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            ],
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_name": "ceph_lv2",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_size": "21470642176",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "name": "ceph_lv2",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "tags": {
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.cluster_name": "ceph",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.crush_device_class": "",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.encrypted": "0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.objectstore": "bluestore",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.osd_id": "2",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.type": "block",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.vdo": "0",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:                "ceph.with_tpm": "0"
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            },
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "type": "block",
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:            "vg_name": "ceph_vg2"
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:        }
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]:    ]
Feb  2 07:00:07 np0005604943 unruffled_bassi[254719]: }
Feb  2 07:00:07 np0005604943 systemd[1]: libpod-d37d26041ffa0305fc6f004e2dcbb015cbb83e903ea3c0038ffa3592e29dce47.scope: Deactivated successfully.
Feb  2 07:00:07 np0005604943 podman[254702]: 2026-02-02 12:00:07.134532125 +0000 UTC m=+0.395024743 container died d37d26041ffa0305fc6f004e2dcbb015cbb83e903ea3c0038ffa3592e29dce47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:00:07 np0005604943 systemd[1]: var-lib-containers-storage-overlay-79412c1bcbc2dbbf4c3a8a954a417bf9d1a66ed664d2df413d4d0bacfcb42627-merged.mount: Deactivated successfully.
Feb  2 07:00:07 np0005604943 podman[254702]: 2026-02-02 12:00:07.171340985 +0000 UTC m=+0.431833563 container remove d37d26041ffa0305fc6f004e2dcbb015cbb83e903ea3c0038ffa3592e29dce47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 07:00:07 np0005604943 systemd[1]: libpod-conmon-d37d26041ffa0305fc6f004e2dcbb015cbb83e903ea3c0038ffa3592e29dce47.scope: Deactivated successfully.
Feb  2 07:00:07 np0005604943 podman[254802]: 2026-02-02 12:00:07.533243759 +0000 UTC m=+0.040397608 container create df382dbe50703011efa0c7aa6e5c99f06eec7844fa3a099b51dc12a4dd59f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_knuth, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb  2 07:00:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:00:07 np0005604943 systemd[1]: Started libpod-conmon-df382dbe50703011efa0c7aa6e5c99f06eec7844fa3a099b51dc12a4dd59f0f4.scope.
Feb  2 07:00:07 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:00:07 np0005604943 podman[254802]: 2026-02-02 12:00:07.513650077 +0000 UTC m=+0.020803956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:00:07 np0005604943 podman[254802]: 2026-02-02 12:00:07.621401632 +0000 UTC m=+0.128555531 container init df382dbe50703011efa0c7aa6e5c99f06eec7844fa3a099b51dc12a4dd59f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:00:07 np0005604943 podman[254802]: 2026-02-02 12:00:07.627491587 +0000 UTC m=+0.134645446 container start df382dbe50703011efa0c7aa6e5c99f06eec7844fa3a099b51dc12a4dd59f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_knuth, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 07:00:07 np0005604943 podman[254802]: 2026-02-02 12:00:07.630863239 +0000 UTC m=+0.138017098 container attach df382dbe50703011efa0c7aa6e5c99f06eec7844fa3a099b51dc12a4dd59f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_knuth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Feb  2 07:00:07 np0005604943 gracious_knuth[254818]: 167 167
Feb  2 07:00:07 np0005604943 systemd[1]: libpod-df382dbe50703011efa0c7aa6e5c99f06eec7844fa3a099b51dc12a4dd59f0f4.scope: Deactivated successfully.
Feb  2 07:00:07 np0005604943 podman[254802]: 2026-02-02 12:00:07.632929085 +0000 UTC m=+0.140082974 container died df382dbe50703011efa0c7aa6e5c99f06eec7844fa3a099b51dc12a4dd59f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_knuth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:00:07 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f39354fce1e3b7091198b53cbe8826d8398fe4d341420e88b47925fc962aa76b-merged.mount: Deactivated successfully.
Feb  2 07:00:07 np0005604943 podman[254802]: 2026-02-02 12:00:07.669349913 +0000 UTC m=+0.176503762 container remove df382dbe50703011efa0c7aa6e5c99f06eec7844fa3a099b51dc12a4dd59f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_knuth, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 07:00:07 np0005604943 systemd[1]: libpod-conmon-df382dbe50703011efa0c7aa6e5c99f06eec7844fa3a099b51dc12a4dd59f0f4.scope: Deactivated successfully.
Feb  2 07:00:07 np0005604943 podman[254842]: 2026-02-02 12:00:07.821244097 +0000 UTC m=+0.034999971 container create 4b658f2cb5fd8013b9c23aba8ba17434ca06a9845be62ce067e78e58162ec6b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 07:00:07 np0005604943 nova_compute[238883]: 2026-02-02 12:00:07.843 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:07 np0005604943 systemd[1]: Started libpod-conmon-4b658f2cb5fd8013b9c23aba8ba17434ca06a9845be62ce067e78e58162ec6b9.scope.
Feb  2 07:00:07 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:00:07 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b611ca57650458fe3656f08bca8a9314edb8287e237eb47aa28f39c9c0ceea0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:00:07 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b611ca57650458fe3656f08bca8a9314edb8287e237eb47aa28f39c9c0ceea0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:00:07 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b611ca57650458fe3656f08bca8a9314edb8287e237eb47aa28f39c9c0ceea0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:00:07 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b611ca57650458fe3656f08bca8a9314edb8287e237eb47aa28f39c9c0ceea0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:00:07 np0005604943 podman[254842]: 2026-02-02 12:00:07.874835001 +0000 UTC m=+0.088590905 container init 4b658f2cb5fd8013b9c23aba8ba17434ca06a9845be62ce067e78e58162ec6b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:00:07 np0005604943 podman[254842]: 2026-02-02 12:00:07.879807207 +0000 UTC m=+0.093563071 container start 4b658f2cb5fd8013b9c23aba8ba17434ca06a9845be62ce067e78e58162ec6b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 07:00:07 np0005604943 podman[254842]: 2026-02-02 12:00:07.882960962 +0000 UTC m=+0.096716856 container attach 4b658f2cb5fd8013b9c23aba8ba17434ca06a9845be62ce067e78e58162ec6b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 07:00:07 np0005604943 podman[254842]: 2026-02-02 12:00:07.806796734 +0000 UTC m=+0.020552628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:00:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 642 KiB/s rd, 4.5 MiB/s wr, 152 op/s
Feb  2 07:00:08 np0005604943 lvm[254935]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:00:08 np0005604943 lvm[254935]: VG ceph_vg0 finished
Feb  2 07:00:08 np0005604943 lvm[254938]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:00:08 np0005604943 lvm[254938]: VG ceph_vg1 finished
Feb  2 07:00:08 np0005604943 lvm[254940]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:00:08 np0005604943 lvm[254940]: VG ceph_vg2 finished
Feb  2 07:00:08 np0005604943 lvm[254941]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:00:08 np0005604943 lvm[254941]: VG ceph_vg0 finished
Feb  2 07:00:08 np0005604943 optimistic_bhabha[254859]: {}
Feb  2 07:00:08 np0005604943 systemd[1]: libpod-4b658f2cb5fd8013b9c23aba8ba17434ca06a9845be62ce067e78e58162ec6b9.scope: Deactivated successfully.
Feb  2 07:00:08 np0005604943 podman[254842]: 2026-02-02 12:00:08.615730533 +0000 UTC m=+0.829486407 container died 4b658f2cb5fd8013b9c23aba8ba17434ca06a9845be62ce067e78e58162ec6b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 07:00:08 np0005604943 systemd[1]: libpod-4b658f2cb5fd8013b9c23aba8ba17434ca06a9845be62ce067e78e58162ec6b9.scope: Consumed 1.030s CPU time.
Feb  2 07:00:08 np0005604943 systemd[1]: var-lib-containers-storage-overlay-0b611ca57650458fe3656f08bca8a9314edb8287e237eb47aa28f39c9c0ceea0-merged.mount: Deactivated successfully.
Feb  2 07:00:08 np0005604943 podman[254842]: 2026-02-02 12:00:08.653003765 +0000 UTC m=+0.866759639 container remove 4b658f2cb5fd8013b9c23aba8ba17434ca06a9845be62ce067e78e58162ec6b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bhabha, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 07:00:08 np0005604943 systemd[1]: libpod-conmon-4b658f2cb5fd8013b9c23aba8ba17434ca06a9845be62ce067e78e58162ec6b9.scope: Deactivated successfully.
Feb  2 07:00:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:00:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:00:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:00:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:00:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:00:09
Feb  2 07:00:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:00:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:00:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.mgr', 'volumes']
Feb  2 07:00:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:00:09 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:00:09 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:00:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:10.025 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:10.026 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:10.027 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 420 KiB/s rd, 2.1 MiB/s wr, 134 op/s
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:00:10 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:10Z|00115|binding|INFO|Releasing lport 6001fd23-eaf7-4f4e-bf94-96506f1de9d4 from this chassis (sb_readonly=0)
Feb  2 07:00:10 np0005604943 nova_compute[238883]: 2026-02-02 12:00:10.868 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:00:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:00:11 np0005604943 nova_compute[238883]: 2026-02-02 12:00:11.197 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:11 np0005604943 nova_compute[238883]: 2026-02-02 12:00:11.982 238887 DEBUG oslo_concurrency.lockutils [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:11 np0005604943 nova_compute[238883]: 2026-02-02 12:00:11.983 238887 DEBUG oslo_concurrency.lockutils [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:11.999 238887 DEBUG nova.objects.instance [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lazy-loading 'flavor' on Instance uuid 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.039 238887 DEBUG oslo_concurrency.lockutils [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.233 238887 DEBUG oslo_concurrency.lockutils [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.233 238887 DEBUG oslo_concurrency.lockutils [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.234 238887 INFO nova.compute.manager [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Attaching volume 5ca9f3ac-4ae9-44e5-a527-38c5069d5aed to /dev/vdb#033[00m
Feb  2 07:00:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 1.6 MiB/s wr, 104 op/s
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.369 238887 DEBUG os_brick.utils [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.370 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.378 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.378 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8f159a-f0e2-48ef-b025-a6b0c75520f0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.380 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.384 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.385 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb0b9b5-4897-4bd4-bbd6-f48954a38c56]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.386 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.391 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.391 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[207d4daf-5b78-41ab-9143-272289c8ea24]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.393 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[543b0d19-dd0a-499c-92a5-9d5f5d606ced]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.393 238887 DEBUG oslo_concurrency.processutils [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:00:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:00:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1034610505' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.406 238887 DEBUG oslo_concurrency.processutils [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "nvme version" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.408 238887 DEBUG os_brick.initiator.connectors.lightos [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.408 238887 DEBUG os_brick.initiator.connectors.lightos [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.409 238887 DEBUG os_brick.initiator.connectors.lightos [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.409 238887 DEBUG os_brick.utils [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] <== get_connector_properties: return (38ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.410 238887 DEBUG nova.virt.block_device [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Updating existing volume attachment record: 3c8be046-449a-4a7f-96b0-593e53475ce0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:00:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:00:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Feb  2 07:00:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Feb  2 07:00:12 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.815 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033597.808011, 484e5b46-6672-4796-8f30-6d3e862428d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.815 238887 INFO nova.compute.manager [-] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.839 238887 DEBUG nova.compute.manager [None req-39d6a735-9520-4d89-bda4-b4f32e93123c - - - - - -] [instance: 484e5b46-6672-4796-8f30-6d3e862428d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:00:12 np0005604943 nova_compute[238883]: 2026-02-02 12:00:12.847 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:00:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2228445231' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:00:13 np0005604943 nova_compute[238883]: 2026-02-02 12:00:13.278 238887 DEBUG nova.objects.instance [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lazy-loading 'flavor' on Instance uuid 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:00:13 np0005604943 nova_compute[238883]: 2026-02-02 12:00:13.298 238887 DEBUG nova.virt.libvirt.driver [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Attempting to attach volume 5ca9f3ac-4ae9-44e5-a527-38c5069d5aed with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 07:00:13 np0005604943 nova_compute[238883]: 2026-02-02 12:00:13.301 238887 DEBUG nova.virt.libvirt.guest [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 07:00:13 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:00:13 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-5ca9f3ac-4ae9-44e5-a527-38c5069d5aed">
Feb  2 07:00:13 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:00:13 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:00:13 np0005604943 nova_compute[238883]:  <auth username="openstack">
Feb  2 07:00:13 np0005604943 nova_compute[238883]:    <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:00:13 np0005604943 nova_compute[238883]:  </auth>
Feb  2 07:00:13 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:00:13 np0005604943 nova_compute[238883]:  <serial>5ca9f3ac-4ae9-44e5-a527-38c5069d5aed</serial>
Feb  2 07:00:13 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:00:13 np0005604943 nova_compute[238883]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 07:00:13 np0005604943 virtqemud[238654]: End of file while reading data: Input/output error
Feb  2 07:00:13 np0005604943 nova_compute[238883]: 2026-02-02 12:00:13.393 238887 DEBUG nova.virt.libvirt.driver [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:00:13 np0005604943 nova_compute[238883]: 2026-02-02 12:00:13.393 238887 DEBUG nova.virt.libvirt.driver [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:00:13 np0005604943 nova_compute[238883]: 2026-02-02 12:00:13.393 238887 DEBUG nova.virt.libvirt.driver [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:00:13 np0005604943 nova_compute[238883]: 2026-02-02 12:00:13.394 238887 DEBUG nova.virt.libvirt.driver [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No VIF found with MAC fa:16:3e:d1:cf:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:00:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Feb  2 07:00:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Feb  2 07:00:13 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Feb  2 07:00:13 np0005604943 nova_compute[238883]: 2026-02-02 12:00:13.612 238887 DEBUG oslo_concurrency.lockutils [None req-2ae7db2d-d662-4f2f-8983-6b2b7743c7be 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 1.6 MiB/s wr, 117 op/s
Feb  2 07:00:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Feb  2 07:00:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Feb  2 07:00:14 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Feb  2 07:00:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Feb  2 07:00:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Feb  2 07:00:15 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Feb  2 07:00:16 np0005604943 nova_compute[238883]: 2026-02-02 12:00:16.199 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 39 KiB/s wr, 24 op/s
Feb  2 07:00:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:00:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3298275717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:00:16 np0005604943 nova_compute[238883]: 2026-02-02 12:00:16.528 238887 DEBUG oslo_concurrency.lockutils [None req-8f3fdc6e-59cc-417f-807e-d16a22555a0f 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:16 np0005604943 nova_compute[238883]: 2026-02-02 12:00:16.528 238887 DEBUG oslo_concurrency.lockutils [None req-8f3fdc6e-59cc-417f-807e-d16a22555a0f 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:16 np0005604943 nova_compute[238883]: 2026-02-02 12:00:16.548 238887 INFO nova.compute.manager [None req-8f3fdc6e-59cc-417f-807e-d16a22555a0f 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Detaching volume 5ca9f3ac-4ae9-44e5-a527-38c5069d5aed#033[00m
Feb  2 07:00:16 np0005604943 nova_compute[238883]: 2026-02-02 12:00:16.654 238887 INFO nova.virt.block_device [None req-8f3fdc6e-59cc-417f-807e-d16a22555a0f 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Attempting to driver detach volume 5ca9f3ac-4ae9-44e5-a527-38c5069d5aed from mountpoint /dev/vdb#033[00m
Feb  2 07:00:16 np0005604943 nova_compute[238883]: 2026-02-02 12:00:16.665 238887 DEBUG nova.virt.libvirt.driver [None req-8f3fdc6e-59cc-417f-807e-d16a22555a0f 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Attempting to detach device vdb from instance 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 07:00:16 np0005604943 nova_compute[238883]: 2026-02-02 12:00:16.666 238887 DEBUG nova.virt.libvirt.guest [None req-8f3fdc6e-59cc-417f-807e-d16a22555a0f 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:00:16 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:00:16 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-5ca9f3ac-4ae9-44e5-a527-38c5069d5aed">
Feb  2 07:00:16 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:00:16 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:00:16 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:00:16 np0005604943 nova_compute[238883]:  <serial>5ca9f3ac-4ae9-44e5-a527-38c5069d5aed</serial>
Feb  2 07:00:16 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:00:16 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:00:16 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:00:16 np0005604943 nova_compute[238883]: 2026-02-02 12:00:16.675 238887 INFO nova.virt.libvirt.driver [None req-8f3fdc6e-59cc-417f-807e-d16a22555a0f 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Successfully detached device vdb from instance 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 from the persistent domain config.#033[00m
Feb  2 07:00:16 np0005604943 nova_compute[238883]: 2026-02-02 12:00:16.676 238887 DEBUG nova.virt.libvirt.driver [None req-8f3fdc6e-59cc-417f-807e-d16a22555a0f 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 07:00:16 np0005604943 nova_compute[238883]: 2026-02-02 12:00:16.676 238887 DEBUG nova.virt.libvirt.guest [None req-8f3fdc6e-59cc-417f-807e-d16a22555a0f 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:00:16 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:00:16 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-5ca9f3ac-4ae9-44e5-a527-38c5069d5aed">
Feb  2 07:00:16 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:00:16 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:00:16 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:00:16 np0005604943 nova_compute[238883]:  <serial>5ca9f3ac-4ae9-44e5-a527-38c5069d5aed</serial>
Feb  2 07:00:16 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:00:16 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:00:16 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:00:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Feb  2 07:00:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Feb  2 07:00:16 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Feb  2 07:00:16 np0005604943 nova_compute[238883]: 2026-02-02 12:00:16.784 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Received event <DeviceRemovedEvent: 1770033616.78394, 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 07:00:16 np0005604943 nova_compute[238883]: 2026-02-02 12:00:16.786 238887 DEBUG nova.virt.libvirt.driver [None req-8f3fdc6e-59cc-417f-807e-d16a22555a0f 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 07:00:16 np0005604943 nova_compute[238883]: 2026-02-02 12:00:16.787 238887 INFO nova.virt.libvirt.driver [None req-8f3fdc6e-59cc-417f-807e-d16a22555a0f 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Successfully detached device vdb from instance 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 from the live domain config.#033[00m
Feb  2 07:00:17 np0005604943 nova_compute[238883]: 2026-02-02 12:00:17.015 238887 DEBUG nova.objects.instance [None req-8f3fdc6e-59cc-417f-807e-d16a22555a0f 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lazy-loading 'flavor' on Instance uuid 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:00:17 np0005604943 nova_compute[238883]: 2026-02-02 12:00:17.047 238887 DEBUG oslo_concurrency.lockutils [None req-8f3fdc6e-59cc-417f-807e-d16a22555a0f 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:00:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Feb  2 07:00:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Feb  2 07:00:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Feb  2 07:00:17 np0005604943 nova_compute[238883]: 2026-02-02 12:00:17.851 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 123 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 575 KiB/s wr, 107 op/s
Feb  2 07:00:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:00:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1404874283' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:00:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Feb  2 07:00:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Feb  2 07:00:18 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Feb  2 07:00:19 np0005604943 nova_compute[238883]: 2026-02-02 12:00:19.654 238887 DEBUG nova.compute.manager [None req-678420ad-de54-4b17-9bfc-1687721be1d5 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:00:19 np0005604943 nova_compute[238883]: 2026-02-02 12:00:19.703 238887 INFO nova.compute.manager [None req-678420ad-de54-4b17-9bfc-1687721be1d5 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] instance snapshotting#033[00m
Feb  2 07:00:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Feb  2 07:00:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Feb  2 07:00:19 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Feb  2 07:00:19 np0005604943 nova_compute[238883]: 2026-02-02 12:00:19.905 238887 INFO nova.virt.libvirt.driver [None req-678420ad-de54-4b17-9bfc-1687721be1d5 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Beginning live snapshot process#033[00m
Feb  2 07:00:20 np0005604943 nova_compute[238883]: 2026-02-02 12:00:20.037 238887 DEBUG nova.virt.libvirt.imagebackend [None req-678420ad-de54-4b17-9bfc-1687721be1d5 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No parent info for 21b263f0-00f1-47be-b8b1-e3c07da0a6a2; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb  2 07:00:20 np0005604943 nova_compute[238883]: 2026-02-02 12:00:20.205 238887 DEBUG nova.storage.rbd_utils [None req-678420ad-de54-4b17-9bfc-1687721be1d5 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] creating snapshot(ba2b78c4bd714f63abaca9c85c31d267) on rbd image(1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb  2 07:00:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 123 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 584 KiB/s wr, 173 op/s
Feb  2 07:00:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Feb  2 07:00:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Feb  2 07:00:20 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Feb  2 07:00:20 np0005604943 nova_compute[238883]: 2026-02-02 12:00:20.900 238887 DEBUG nova.storage.rbd_utils [None req-678420ad-de54-4b17-9bfc-1687721be1d5 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] cloning vms/1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk@ba2b78c4bd714f63abaca9c85c31d267 to images/3c21bba8-7447-4f7b-8add-32d60d531dee clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb  2 07:00:21 np0005604943 nova_compute[238883]: 2026-02-02 12:00:21.012 238887 DEBUG nova.storage.rbd_utils [None req-678420ad-de54-4b17-9bfc-1687721be1d5 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] flattening images/3c21bba8-7447-4f7b-8add-32d60d531dee flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb  2 07:00:21 np0005604943 nova_compute[238883]: 2026-02-02 12:00:21.225 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:21 np0005604943 nova_compute[238883]: 2026-02-02 12:00:21.418 238887 DEBUG nova.storage.rbd_utils [None req-678420ad-de54-4b17-9bfc-1687721be1d5 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] removing snapshot(ba2b78c4bd714f63abaca9c85c31d267) on rbd image(1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000762355229021974 of space, bias 1.0, pg target 0.22870656870659217 quantized to 32 (current 32)
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.199242509575012e-05 of space, bias 1.0, pg target 0.012597727528725036 quantized to 32 (current 32)
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 7.013607950440964e-07 of space, bias 1.0, pg target 0.00021040823851322892 quantized to 32 (current 32)
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661044671804542 of space, bias 1.0, pg target 0.19983134015413626 quantized to 32 (current 32)
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.095433777614583e-06 of space, bias 4.0, pg target 0.0013145205331374994 quantized to 16 (current 16)
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:00:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 07:00:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Feb  2 07:00:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Feb  2 07:00:21 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Feb  2 07:00:21 np0005604943 nova_compute[238883]: 2026-02-02 12:00:21.861 238887 DEBUG nova.storage.rbd_utils [None req-678420ad-de54-4b17-9bfc-1687721be1d5 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] creating snapshot(snap) on rbd image(3c21bba8-7447-4f7b-8add-32d60d531dee) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb  2 07:00:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:00:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1057897679' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:00:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:00:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1057897679' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:00:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 123 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 8.7 KiB/s wr, 66 op/s
Feb  2 07:00:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:00:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Feb  2 07:00:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Feb  2 07:00:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Feb  2 07:00:22 np0005604943 nova_compute[238883]: 2026-02-02 12:00:22.870 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:23 np0005604943 nova_compute[238883]: 2026-02-02 12:00:23.763 238887 INFO nova.virt.libvirt.driver [None req-678420ad-de54-4b17-9bfc-1687721be1d5 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Snapshot image upload complete#033[00m
Feb  2 07:00:23 np0005604943 nova_compute[238883]: 2026-02-02 12:00:23.764 238887 INFO nova.compute.manager [None req-678420ad-de54-4b17-9bfc-1687721be1d5 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Took 4.06 seconds to snapshot the instance on the hypervisor.#033[00m
Feb  2 07:00:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 202 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 401 op/s
Feb  2 07:00:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:00:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3082504306' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:00:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Feb  2 07:00:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Feb  2 07:00:25 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Feb  2 07:00:26 np0005604943 nova_compute[238883]: 2026-02-02 12:00:26.253 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 202 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 8.8 MiB/s rd, 8.6 MiB/s wr, 328 op/s
Feb  2 07:00:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Feb  2 07:00:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Feb  2 07:00:26 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Feb  2 07:00:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:00:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Feb  2 07:00:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Feb  2 07:00:27 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.627 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.628 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.652 238887 DEBUG nova.compute.manager [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.726 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.727 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.736 238887 DEBUG nova.virt.hardware [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.737 238887 INFO nova.compute.claims [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.816 238887 DEBUG nova.scheduler.client.report [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Refreshing inventories for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.839 238887 DEBUG nova.scheduler.client.report [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Updating ProviderTree inventory for provider 30401227-b88f-415d-9c2d-3119bd1baf61 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.840 238887 DEBUG nova.compute.provider_tree [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Updating inventory in ProviderTree for provider 30401227-b88f-415d-9c2d-3119bd1baf61 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.855 238887 DEBUG nova.scheduler.client.report [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Refreshing aggregate associations for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.874 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.885 238887 DEBUG nova.scheduler.client.report [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Refreshing trait associations for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  2 07:00:27 np0005604943 nova_compute[238883]: 2026-02-02 12:00:27.974 238887 DEBUG oslo_concurrency.processutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:00:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 202 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 6.0 MiB/s wr, 227 op/s
Feb  2 07:00:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:00:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462486391' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.514 238887 DEBUG oslo_concurrency.processutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.520 238887 DEBUG nova.compute.provider_tree [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.537 238887 DEBUG nova.scheduler.client.report [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.560 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.561 238887 DEBUG nova.compute.manager [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.606 238887 DEBUG nova.compute.manager [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.606 238887 DEBUG nova.network.neutron [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.622 238887 INFO nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.643 238887 DEBUG nova.compute.manager [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.779 238887 DEBUG nova.compute.manager [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.780 238887 DEBUG nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.781 238887 INFO nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Creating image(s)#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.808 238887 DEBUG nova.storage.rbd_utils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] rbd image 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.837 238887 DEBUG nova.storage.rbd_utils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] rbd image 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.869 238887 DEBUG nova.storage.rbd_utils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] rbd image 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.874 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "025e6b6381dcfbbeae7b83278d10e0f71ee0b88c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.875 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "025e6b6381dcfbbeae7b83278d10e0f71ee0b88c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:28 np0005604943 nova_compute[238883]: 2026-02-02 12:00:28.882 238887 DEBUG nova.policy [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '55f5d320b54948c9a8f465d017972291', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '82fc9ca354da4dd4bdccf919f13d3561', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:00:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:00:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/14354024' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:00:29 np0005604943 nova_compute[238883]: 2026-02-02 12:00:29.132 238887 DEBUG nova.virt.libvirt.imagebackend [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Image locations are: [{'url': 'rbd://4548a36b-7cdc-5e3e-a814-4e1571be1fae/images/3c21bba8-7447-4f7b-8add-32d60d531dee/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://4548a36b-7cdc-5e3e-a814-4e1571be1fae/images/3c21bba8-7447-4f7b-8add-32d60d531dee/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Feb  2 07:00:29 np0005604943 nova_compute[238883]: 2026-02-02 12:00:29.187 238887 DEBUG nova.virt.libvirt.imagebackend [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Selected location: {'url': 'rbd://4548a36b-7cdc-5e3e-a814-4e1571be1fae/images/3c21bba8-7447-4f7b-8add-32d60d531dee/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Feb  2 07:00:29 np0005604943 nova_compute[238883]: 2026-02-02 12:00:29.188 238887 DEBUG nova.storage.rbd_utils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] cloning images/3c21bba8-7447-4f7b-8add-32d60d531dee@snap to None/959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb  2 07:00:29 np0005604943 nova_compute[238883]: 2026-02-02 12:00:29.287 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "025e6b6381dcfbbeae7b83278d10e0f71ee0b88c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:29 np0005604943 nova_compute[238883]: 2026-02-02 12:00:29.441 238887 DEBUG nova.objects.instance [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lazy-loading 'migration_context' on Instance uuid 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:00:29 np0005604943 nova_compute[238883]: 2026-02-02 12:00:29.463 238887 DEBUG nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 07:00:29 np0005604943 nova_compute[238883]: 2026-02-02 12:00:29.463 238887 DEBUG nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Ensure instance console log exists: /var/lib/nova/instances/959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:00:29 np0005604943 nova_compute[238883]: 2026-02-02 12:00:29.464 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:29 np0005604943 nova_compute[238883]: 2026-02-02 12:00:29.465 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:29 np0005604943 nova_compute[238883]: 2026-02-02 12:00:29.465 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:29 np0005604943 nova_compute[238883]: 2026-02-02 12:00:29.741 238887 DEBUG nova.network.neutron [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Successfully created port: 1d1b9b21-b452-4b32-a535-8b2ecfac26e6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:00:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Feb  2 07:00:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Feb  2 07:00:29 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Feb  2 07:00:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 202 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 5.9 KiB/s wr, 101 op/s
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.661 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.662 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.662 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.662 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.663 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.686 238887 DEBUG nova.network.neutron [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Successfully updated port: 1d1b9b21-b452-4b32-a535-8b2ecfac26e6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.710 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "refresh_cache-959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.711 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquired lock "refresh_cache-959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.711 238887 DEBUG nova.network.neutron [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.800 238887 DEBUG nova.compute.manager [req-79043a8d-937d-46e9-a6d1-2b8b29116a0e req-8046c4f5-3402-43a2-b807-c4da9cd7088e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Received event network-changed-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.801 238887 DEBUG nova.compute.manager [req-79043a8d-937d-46e9-a6d1-2b8b29116a0e req-8046c4f5-3402-43a2-b807-c4da9cd7088e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Refreshing instance network info cache due to event network-changed-1d1b9b21-b452-4b32-a535-8b2ecfac26e6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.802 238887 DEBUG oslo_concurrency.lockutils [req-79043a8d-937d-46e9-a6d1-2b8b29116a0e req-8046c4f5-3402-43a2-b807-c4da9cd7088e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:00:30 np0005604943 nova_compute[238883]: 2026-02-02 12:00:30.861 238887 DEBUG nova.network.neutron [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:00:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Feb  2 07:00:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Feb  2 07:00:30 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Feb  2 07:00:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:00:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3179904213' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.246 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.288 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.318 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.319 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.491 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.492 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4326MB free_disk=59.942537064664066GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.492 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.493 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.562 238887 DEBUG nova.network.neutron [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Updating instance_info_cache with network_info: [{"id": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "address": "fa:16:3e:74:a4:df", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d1b9b21-b4", "ovs_interfaceid": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.564 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.564 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.564 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.564 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.579 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Releasing lock "refresh_cache-959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.580 238887 DEBUG nova.compute.manager [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Instance network_info: |[{"id": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "address": "fa:16:3e:74:a4:df", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d1b9b21-b4", "ovs_interfaceid": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.580 238887 DEBUG oslo_concurrency.lockutils [req-79043a8d-937d-46e9-a6d1-2b8b29116a0e req-8046c4f5-3402-43a2-b807-c4da9cd7088e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.580 238887 DEBUG nova.network.neutron [req-79043a8d-937d-46e9-a6d1-2b8b29116a0e req-8046c4f5-3402-43a2-b807-c4da9cd7088e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Refreshing network info cache for port 1d1b9b21-b452-4b32-a535-8b2ecfac26e6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.583 238887 DEBUG nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Start _get_guest_xml network_info=[{"id": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "address": "fa:16:3e:74:a4:df", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d1b9b21-b4", "ovs_interfaceid": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-02T12:00:19Z,direct_url=<?>,disk_format='raw',id=3c21bba8-7447-4f7b-8add-32d60d531dee,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1491073825',owner='82fc9ca354da4dd4bdccf919f13d3561',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-02T12:00:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '3c21bba8-7447-4f7b-8add-32d60d531dee'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.590 238887 WARNING nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.595 238887 DEBUG nova.virt.libvirt.host [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.596 238887 DEBUG nova.virt.libvirt.host [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.606 238887 DEBUG nova.virt.libvirt.host [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.607 238887 DEBUG nova.virt.libvirt.host [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.607 238887 DEBUG nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.608 238887 DEBUG nova.virt.hardware [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-02T12:00:19Z,direct_url=<?>,disk_format='raw',id=3c21bba8-7447-4f7b-8add-32d60d531dee,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1491073825',owner='82fc9ca354da4dd4bdccf919f13d3561',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-02T12:00:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.608 238887 DEBUG nova.virt.hardware [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.609 238887 DEBUG nova.virt.hardware [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.609 238887 DEBUG nova.virt.hardware [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.609 238887 DEBUG nova.virt.hardware [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.609 238887 DEBUG nova.virt.hardware [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.610 238887 DEBUG nova.virt.hardware [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.610 238887 DEBUG nova.virt.hardware [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.610 238887 DEBUG nova.virt.hardware [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.610 238887 DEBUG nova.virt.hardware [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.611 238887 DEBUG nova.virt.hardware [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.613 238887 DEBUG oslo_concurrency.processutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:00:31 np0005604943 nova_compute[238883]: 2026-02-02 12:00:31.629 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:00:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:00:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905955764' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.155 238887 DEBUG oslo_concurrency.processutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:00:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:00:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2703608095' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.185 238887 DEBUG nova.storage.rbd_utils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] rbd image 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.192 238887 DEBUG oslo_concurrency.processutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.218 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.224 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.250 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:00:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 202 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 4.8 KiB/s wr, 82 op/s
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.287 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.288 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.754 238887 DEBUG nova.network.neutron [req-79043a8d-937d-46e9-a6d1-2b8b29116a0e req-8046c4f5-3402-43a2-b807-c4da9cd7088e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Updated VIF entry in instance network info cache for port 1d1b9b21-b452-4b32-a535-8b2ecfac26e6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.755 238887 DEBUG nova.network.neutron [req-79043a8d-937d-46e9-a6d1-2b8b29116a0e req-8046c4f5-3402-43a2-b807-c4da9cd7088e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Updating instance_info_cache with network_info: [{"id": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "address": "fa:16:3e:74:a4:df", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d1b9b21-b4", "ovs_interfaceid": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:00:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:00:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3563205974' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.771 238887 DEBUG oslo_concurrency.lockutils [req-79043a8d-937d-46e9-a6d1-2b8b29116a0e req-8046c4f5-3402-43a2-b807-c4da9cd7088e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:00:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:00:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826442593' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.808 238887 DEBUG oslo_concurrency.processutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.810 238887 DEBUG nova.virt.libvirt.vif [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:00:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-863790867',display_name='tempest-TestStampPattern-server-863790867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-863790867',id=12,image_ref='3c21bba8-7447-4f7b-8add-32d60d531dee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHkgfVtqBda0LVlF5slmF25Lo/XwS8Q8Sghn9kMaubVvv9bxRUWvKYk1te57NsoxW3EiHAVoG8/mfQ9ewKRmH/t5lWTLgWAau4XX+kOaKUVaGSh/OmZZNeyoLD4n3OeH0A==',key_name='tempest-TestStampPattern-80198922',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='82fc9ca354da4dd4bdccf919f13d3561',ramdisk_id='',reservation_id='r-zzuf9m4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='1b038c3f-57e2-4f69-a27c-2ba8d465dfc1',image_min_disk='1',image_min_ram='0',image_owner_id='82fc9ca354da4dd4bdccf919f13d3561',image_owner_project_name='tempest-TestStampPattern-577361379',image_owner_user_name='tempest-TestStampPattern-577361379-project-member',image_user_id='55f5d320b54948c9a8f465d017972291',network_allocated='True',owner_project_name='tempest-TestStampPattern-577361379',owner_user_name='tempest-TestStampPattern-577361379-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:00:28Z,user_data=None,user_id='55f5d320b54948c9a8f465d017972291',uuid=959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8,vcpu_model=
VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "address": "fa:16:3e:74:a4:df", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d1b9b21-b4", "ovs_interfaceid": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.811 238887 DEBUG nova.network.os_vif_util [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Converting VIF {"id": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "address": "fa:16:3e:74:a4:df", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d1b9b21-b4", "ovs_interfaceid": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.812 238887 DEBUG nova.network.os_vif_util [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:a4:df,bridge_name='br-int',has_traffic_filtering=True,id=1d1b9b21-b452-4b32-a535-8b2ecfac26e6,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d1b9b21-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.813 238887 DEBUG nova.objects.instance [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lazy-loading 'pci_devices' on Instance uuid 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.827 238887 DEBUG nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  <uuid>959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8</uuid>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  <name>instance-0000000c</name>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestStampPattern-server-863790867</nova:name>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:00:31</nova:creationTime>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:        <nova:user uuid="55f5d320b54948c9a8f465d017972291">tempest-TestStampPattern-577361379-project-member</nova:user>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:        <nova:project uuid="82fc9ca354da4dd4bdccf919f13d3561">tempest-TestStampPattern-577361379</nova:project>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="3c21bba8-7447-4f7b-8add-32d60d531dee"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:        <nova:port uuid="1d1b9b21-b452-4b32-a535-8b2ecfac26e6">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <entry name="serial">959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8</entry>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <entry name="uuid">959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8</entry>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8_disk">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8_disk.config">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:74:a4:df"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <target dev="tap1d1b9b21-b4"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8/console.log" append="off"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <input type="keyboard" bus="usb"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:00:32 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:00:32 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:00:32 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:00:32 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.829 238887 DEBUG nova.compute.manager [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Preparing to wait for external event network-vif-plugged-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.830 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.830 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.830 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.831 238887 DEBUG nova.virt.libvirt.vif [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:00:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-863790867',display_name='tempest-TestStampPattern-server-863790867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-863790867',id=12,image_ref='3c21bba8-7447-4f7b-8add-32d60d531dee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHkgfVtqBda0LVlF5slmF25Lo/XwS8Q8Sghn9kMaubVvv9bxRUWvKYk1te57NsoxW3EiHAVoG8/mfQ9ewKRmH/t5lWTLgWAau4XX+kOaKUVaGSh/OmZZNeyoLD4n3OeH0A==',key_name='tempest-TestStampPattern-80198922',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='82fc9ca354da4dd4bdccf919f13d3561',ramdisk_id='',reservation_id='r-zzuf9m4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='1b038c3f-57e2-4f69-a27c-2ba8d465dfc1',image_min_disk='1',image_min_ram='0',image_owner_id='82fc9ca354da4dd4bdccf919f13d3561',image_owner_project_name='tempest-TestStampPattern-577361379',image_owner_user_name='tempest-TestStampPattern-577361379-project-member',image_user_id='55f5d320b54948c9a8f465d017972291',network_allocated='True',owner_project_name='tempest-TestStampPattern-577361379',owner_user_name='tempest-TestStampPattern-577361379-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:00:28Z,user_data=None,user_id='55f5d320b54948c9a8f465d017972291',uuid=959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8,v
cpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "address": "fa:16:3e:74:a4:df", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d1b9b21-b4", "ovs_interfaceid": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.831 238887 DEBUG nova.network.os_vif_util [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Converting VIF {"id": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "address": "fa:16:3e:74:a4:df", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d1b9b21-b4", "ovs_interfaceid": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.832 238887 DEBUG nova.network.os_vif_util [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:a4:df,bridge_name='br-int',has_traffic_filtering=True,id=1d1b9b21-b452-4b32-a535-8b2ecfac26e6,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d1b9b21-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.832 238887 DEBUG os_vif [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:a4:df,bridge_name='br-int',has_traffic_filtering=True,id=1d1b9b21-b452-4b32-a535-8b2ecfac26e6,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d1b9b21-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.833 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.834 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.834 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.839 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.840 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d1b9b21-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.841 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d1b9b21-b4, col_values=(('external_ids', {'iface-id': '1d1b9b21-b452-4b32-a535-8b2ecfac26e6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:a4:df', 'vm-uuid': '959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.843 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:32 np0005604943 NetworkManager[49093]: <info>  [1770033632.8444] manager: (tap1d1b9b21-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.845 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.851 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.853 238887 INFO os_vif [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:a4:df,bridge_name='br-int',has_traffic_filtering=True,id=1d1b9b21-b452-4b32-a535-8b2ecfac26e6,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d1b9b21-b4')#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.906 238887 DEBUG nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.907 238887 DEBUG nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.907 238887 DEBUG nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No VIF found with MAC fa:16:3e:74:a4:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.907 238887 INFO nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Using config drive#033[00m
Feb  2 07:00:32 np0005604943 nova_compute[238883]: 2026-02-02 12:00:32.939 238887 DEBUG nova.storage.rbd_utils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] rbd image 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:00:33 np0005604943 nova_compute[238883]: 2026-02-02 12:00:33.288 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:00:33 np0005604943 nova_compute[238883]: 2026-02-02 12:00:33.289 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:00:33 np0005604943 nova_compute[238883]: 2026-02-02 12:00:33.328 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 07:00:33 np0005604943 nova_compute[238883]: 2026-02-02 12:00:33.329 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:00:33 np0005604943 nova_compute[238883]: 2026-02-02 12:00:33.329 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:00:33 np0005604943 nova_compute[238883]: 2026-02-02 12:00:33.330 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:00:33 np0005604943 nova_compute[238883]: 2026-02-02 12:00:33.330 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:00:33 np0005604943 nova_compute[238883]: 2026-02-02 12:00:33.417 238887 INFO nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Creating config drive at /var/lib/nova/instances/959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8/disk.config#033[00m
Feb  2 07:00:33 np0005604943 nova_compute[238883]: 2026-02-02 12:00:33.420 238887 DEBUG oslo_concurrency.processutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfrkja_ug execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:00:33 np0005604943 nova_compute[238883]: 2026-02-02 12:00:33.546 238887 DEBUG oslo_concurrency.processutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfrkja_ug" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:00:33 np0005604943 nova_compute[238883]: 2026-02-02 12:00:33.575 238887 DEBUG nova.storage.rbd_utils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] rbd image 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:00:33 np0005604943 nova_compute[238883]: 2026-02-02 12:00:33.580 238887 DEBUG oslo_concurrency.processutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8/disk.config 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:00:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.007 238887 DEBUG oslo_concurrency.processutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8/disk.config 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.008 238887 INFO nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Deleting local config drive /var/lib/nova/instances/959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8/disk.config because it was imported into RBD.#033[00m
Feb  2 07:00:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Feb  2 07:00:34 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Feb  2 07:00:34 np0005604943 kernel: tap1d1b9b21-b4: entered promiscuous mode
Feb  2 07:00:34 np0005604943 NetworkManager[49093]: <info>  [1770033634.0602] manager: (tap1d1b9b21-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.061 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:34 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:34Z|00116|binding|INFO|Claiming lport 1d1b9b21-b452-4b32-a535-8b2ecfac26e6 for this chassis.
Feb  2 07:00:34 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:34Z|00117|binding|INFO|1d1b9b21-b452-4b32-a535-8b2ecfac26e6: Claiming fa:16:3e:74:a4:df 10.100.0.10
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.070 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:a4:df 10.100.0.10'], port_security=['fa:16:3e:74:a4:df 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de22212f-33f4-472b-8b67-05be2c5418f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc9ca354da4dd4bdccf919f13d3561', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c1acc81f-70be-41e7-925b-e46224557e82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d9a408e-4ec8-415c-980d-60f0a24de8bc, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=1d1b9b21-b452-4b32-a535-8b2ecfac26e6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.071 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 1d1b9b21-b452-4b32-a535-8b2ecfac26e6 in datapath de22212f-33f4-472b-8b67-05be2c5418f5 bound to our chassis#033[00m
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.072 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de22212f-33f4-472b-8b67-05be2c5418f5#033[00m
Feb  2 07:00:34 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:34Z|00118|binding|INFO|Setting lport 1d1b9b21-b452-4b32-a535-8b2ecfac26e6 ovn-installed in OVS
Feb  2 07:00:34 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:34Z|00119|binding|INFO|Setting lport 1d1b9b21-b452-4b32-a535-8b2ecfac26e6 up in Southbound
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.075 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.079 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.092 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7986203e-7829-457a-a75c-7f693389b869]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:00:34 np0005604943 systemd-udevd[255547]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:00:34 np0005604943 systemd-machined[206973]: New machine qemu-12-instance-0000000c.
Feb  2 07:00:34 np0005604943 NetworkManager[49093]: <info>  [1770033634.1165] device (tap1d1b9b21-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:00:34 np0005604943 NetworkManager[49093]: <info>  [1770033634.1171] device (tap1d1b9b21-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:00:34 np0005604943 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.124 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cd5c3f-dd6e-404e-a17e-1ade0c941b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.128 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[6501c4ed-7d69-4e1f-a002-0e8553bfd9f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.151 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[2157abc2-9730-45d7-8c83-7bf501ce6d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:00:34 np0005604943 podman[255527]: 2026-02-02 12:00:34.160217867 +0000 UTC m=+0.074596176 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.173 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1d86522e-4795-48cb-b036-379e445493d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde22212f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:22:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403687, 'reachable_time': 42411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255576, 'error': None, 'target': 'ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:00:34 np0005604943 podman[255526]: 2026-02-02 12:00:34.194347544 +0000 UTC m=+0.112201827 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, 
org.label-schema.license=GPLv2, config_id=ovn_controller)
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.194 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[4118a18c-8e60-48d6-a1e8-e900c31c9ba9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapde22212f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403696, 'tstamp': 403696}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255587, 'error': None, 'target': 'ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapde22212f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403698, 'tstamp': 403698}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255587, 'error': None, 'target': 'ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.196 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde22212f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.198 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.199 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.200 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde22212f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.201 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.202 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde22212f-30, col_values=(('external_ids', {'iface-id': '6001fd23-eaf7-4f4e-bf94-96506f1de9d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:00:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:34.202 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:00:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 212 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 554 KiB/s wr, 192 op/s
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.400 238887 DEBUG nova.compute.manager [req-0acb994d-a346-4f9c-8736-d8b2cd2b9313 req-c20018b3-fa31-4516-818a-dcbefee98f82 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Received event network-vif-plugged-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.400 238887 DEBUG oslo_concurrency.lockutils [req-0acb994d-a346-4f9c-8736-d8b2cd2b9313 req-c20018b3-fa31-4516-818a-dcbefee98f82 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.400 238887 DEBUG oslo_concurrency.lockutils [req-0acb994d-a346-4f9c-8736-d8b2cd2b9313 req-c20018b3-fa31-4516-818a-dcbefee98f82 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.402 238887 DEBUG oslo_concurrency.lockutils [req-0acb994d-a346-4f9c-8736-d8b2cd2b9313 req-c20018b3-fa31-4516-818a-dcbefee98f82 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.402 238887 DEBUG nova.compute.manager [req-0acb994d-a346-4f9c-8736-d8b2cd2b9313 req-c20018b3-fa31-4516-818a-dcbefee98f82 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Processing event network-vif-plugged-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.553 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033634.5531635, 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.554 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] VM Started (Lifecycle Event)#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.556 238887 DEBUG nova.compute.manager [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.560 238887 DEBUG nova.virt.libvirt.driver [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.562 238887 INFO nova.virt.libvirt.driver [-] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Instance spawned successfully.#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.563 238887 INFO nova.compute.manager [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Took 5.78 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.563 238887 DEBUG nova.compute.manager [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.570 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.573 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.596 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.596 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033634.556171, 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.597 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.613 238887 INFO nova.compute.manager [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Took 6.91 seconds to build instance.#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.619 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.621 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033634.559051, 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.621 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.628 238887 DEBUG oslo_concurrency.lockutils [None req-d5514e90-e11e-4c0b-a3ee-dbf3d9343422 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.638 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.641 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:00:34 np0005604943 nova_compute[238883]: 2026-02-02 12:00:34.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:00:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Feb  2 07:00:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Feb  2 07:00:35 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Feb  2 07:00:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:00:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2839956559' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:00:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:00:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2839956559' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:00:35 np0005604943 nova_compute[238883]: 2026-02-02 12:00:35.635 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:00:35 np0005604943 nova_compute[238883]: 2026-02-02 12:00:35.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:00:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 212 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 552 KiB/s wr, 160 op/s
Feb  2 07:00:36 np0005604943 nova_compute[238883]: 2026-02-02 12:00:36.290 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:36 np0005604943 nova_compute[238883]: 2026-02-02 12:00:36.486 238887 DEBUG nova.compute.manager [req-875b97cc-fead-40ab-a21f-ab670f1bc488 req-46309d70-a45a-4dbc-8af7-eed80053631d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Received event network-vif-plugged-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:00:36 np0005604943 nova_compute[238883]: 2026-02-02 12:00:36.486 238887 DEBUG oslo_concurrency.lockutils [req-875b97cc-fead-40ab-a21f-ab670f1bc488 req-46309d70-a45a-4dbc-8af7-eed80053631d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:00:36 np0005604943 nova_compute[238883]: 2026-02-02 12:00:36.486 238887 DEBUG oslo_concurrency.lockutils [req-875b97cc-fead-40ab-a21f-ab670f1bc488 req-46309d70-a45a-4dbc-8af7-eed80053631d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:00:36 np0005604943 nova_compute[238883]: 2026-02-02 12:00:36.487 238887 DEBUG oslo_concurrency.lockutils [req-875b97cc-fead-40ab-a21f-ab670f1bc488 req-46309d70-a45a-4dbc-8af7-eed80053631d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:00:36 np0005604943 nova_compute[238883]: 2026-02-02 12:00:36.487 238887 DEBUG nova.compute.manager [req-875b97cc-fead-40ab-a21f-ab670f1bc488 req-46309d70-a45a-4dbc-8af7-eed80053631d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] No waiting events found dispatching network-vif-plugged-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:00:36 np0005604943 nova_compute[238883]: 2026-02-02 12:00:36.487 238887 WARNING nova.compute.manager [req-875b97cc-fead-40ab-a21f-ab670f1bc488 req-46309d70-a45a-4dbc-8af7-eed80053631d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Received unexpected event network-vif-plugged-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 for instance with vm_state active and task_state None.#033[00m
Feb  2 07:00:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:00:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2757816300' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:00:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:00:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2757816300' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:00:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:00:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3321046668' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:00:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:00:37 np0005604943 nova_compute[238883]: 2026-02-02 12:00:37.588 238887 DEBUG nova.compute.manager [req-40d81d83-cb8e-4bbe-8091-7fed91afa95e req-1a2b47a6-37e5-4fda-9b6f-aa8db899031d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Received event network-changed-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:00:37 np0005604943 nova_compute[238883]: 2026-02-02 12:00:37.588 238887 DEBUG nova.compute.manager [req-40d81d83-cb8e-4bbe-8091-7fed91afa95e req-1a2b47a6-37e5-4fda-9b6f-aa8db899031d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Refreshing instance network info cache due to event network-changed-1d1b9b21-b452-4b32-a535-8b2ecfac26e6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:00:37 np0005604943 nova_compute[238883]: 2026-02-02 12:00:37.589 238887 DEBUG oslo_concurrency.lockutils [req-40d81d83-cb8e-4bbe-8091-7fed91afa95e req-1a2b47a6-37e5-4fda-9b6f-aa8db899031d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:00:37 np0005604943 nova_compute[238883]: 2026-02-02 12:00:37.589 238887 DEBUG oslo_concurrency.lockutils [req-40d81d83-cb8e-4bbe-8091-7fed91afa95e req-1a2b47a6-37e5-4fda-9b6f-aa8db899031d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:00:37 np0005604943 nova_compute[238883]: 2026-02-02 12:00:37.589 238887 DEBUG nova.network.neutron [req-40d81d83-cb8e-4bbe-8091-7fed91afa95e req-1a2b47a6-37e5-4fda-9b6f-aa8db899031d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Refreshing network info cache for port 1d1b9b21-b452-4b32-a535-8b2ecfac26e6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:00:37 np0005604943 nova_compute[238883]: 2026-02-02 12:00:37.845 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Feb  2 07:00:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Feb  2 07:00:38 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Feb  2 07:00:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 248 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.6 MiB/s wr, 378 op/s
Feb  2 07:00:38 np0005604943 nova_compute[238883]: 2026-02-02 12:00:38.569 238887 DEBUG nova.network.neutron [req-40d81d83-cb8e-4bbe-8091-7fed91afa95e req-1a2b47a6-37e5-4fda-9b6f-aa8db899031d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Updated VIF entry in instance network info cache for port 1d1b9b21-b452-4b32-a535-8b2ecfac26e6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:00:38 np0005604943 nova_compute[238883]: 2026-02-02 12:00:38.570 238887 DEBUG nova.network.neutron [req-40d81d83-cb8e-4bbe-8091-7fed91afa95e req-1a2b47a6-37e5-4fda-9b6f-aa8db899031d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Updating instance_info_cache with network_info: [{"id": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "address": "fa:16:3e:74:a4:df", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d1b9b21-b4", "ovs_interfaceid": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:00:38 np0005604943 nova_compute[238883]: 2026-02-02 12:00:38.602 238887 DEBUG oslo_concurrency.lockutils [req-40d81d83-cb8e-4bbe-8091-7fed91afa95e req-1a2b47a6-37e5-4fda-9b6f-aa8db899031d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:00:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Feb  2 07:00:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Feb  2 07:00:39 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Feb  2 07:00:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Feb  2 07:00:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Feb  2 07:00:40 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Feb  2 07:00:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 248 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.5 MiB/s wr, 361 op/s
Feb  2 07:00:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:00:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:00:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:00:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:00:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:00:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:00:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Feb  2 07:00:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Feb  2 07:00:41 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Feb  2 07:00:41 np0005604943 nova_compute[238883]: 2026-02-02 12:00:41.292 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:00:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/633402555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:00:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:00:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/633402555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:00:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 248 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 38 KiB/s wr, 140 op/s
Feb  2 07:00:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:00:42 np0005604943 nova_compute[238883]: 2026-02-02 12:00:42.850 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Feb  2 07:00:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Feb  2 07:00:43 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Feb  2 07:00:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:00:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2026910223' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:00:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:00:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2026910223' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:00:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 7.0 KiB/s wr, 157 op/s
Feb  2 07:00:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:00:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4034057302' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:00:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:00:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4034057302' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:00:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Feb  2 07:00:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Feb  2 07:00:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Feb  2 07:00:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Feb  2 07:00:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Feb  2 07:00:46 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Feb  2 07:00:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 7.1 KiB/s wr, 158 op/s
Feb  2 07:00:46 np0005604943 nova_compute[238883]: 2026-02-02 12:00:46.295 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:47 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:47Z|00018|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.10
Feb  2 07:00:47 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:47Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:74:a4:df 10.100.0.10
Feb  2 07:00:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:00:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Feb  2 07:00:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Feb  2 07:00:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Feb  2 07:00:47 np0005604943 nova_compute[238883]: 2026-02-02 12:00:47.874 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 257 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 985 KiB/s wr, 226 op/s
Feb  2 07:00:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Feb  2 07:00:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Feb  2 07:00:48 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Feb  2 07:00:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Feb  2 07:00:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Feb  2 07:00:49 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Feb  2 07:00:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:00:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3316925113' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:00:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:00:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3316925113' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:00:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 262 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 333 op/s
Feb  2 07:00:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Feb  2 07:00:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Feb  2 07:00:50 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Feb  2 07:00:50 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:50Z|00020|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.10
Feb  2 07:00:50 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:50Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:74:a4:df 10.100.0.10
Feb  2 07:00:51 np0005604943 nova_compute[238883]: 2026-02-02 12:00:51.299 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Feb  2 07:00:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Feb  2 07:00:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Feb  2 07:00:51 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 07:00:51 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5217 writes, 23K keys, 5217 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 5217 writes, 5217 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1856 writes, 8309 keys, 1856 commit groups, 1.0 writes per commit group, ingest: 11.26 MB, 0.02 MB/s#012Interval WAL: 1856 writes, 1856 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    135.4      0.18              0.06        12    0.015       0      0       0.0       0.0#012  L6      1/0    7.32 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    150.5    123.9      0.65              0.19        11    0.059     49K   5796       0.0       0.0#012 Sum      1/0    7.32 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    117.7    126.4      0.83              0.25        23    0.036     49K   5796       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.7    115.0    115.2      0.40              0.09        10    0.040     24K   2589       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    150.5    123.9      0.65              0.19        11    0.059     49K   5796       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    138.7      0.18              0.06        11    0.016       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.1      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.024, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.10 GB read, 0.05 MB/s read, 0.8 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cd5e4c78d0#2 capacity: 304.00 MB usage: 9.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 8.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(576,8.99 MB,2.95716%) FilterBlock(24,142.92 KB,0.0459119%) IndexBlock(24,277.45 KB,0.0891284%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 07:00:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:00:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/123053223' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:00:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:00:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/123053223' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:00:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 262 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 788 KiB/s rd, 259 KiB/s wr, 124 op/s
Feb  2 07:00:52 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:52Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:74:a4:df 10.100.0.10
Feb  2 07:00:52 np0005604943 ovn_controller[145056]: 2026-02-02T12:00:52Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:74:a4:df 10.100.0.10
Feb  2 07:00:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:00:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Feb  2 07:00:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Feb  2 07:00:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Feb  2 07:00:52 np0005604943 nova_compute[238883]: 2026-02-02 12:00:52.876 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Feb  2 07:00:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Feb  2 07:00:53 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Feb  2 07:00:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:00:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686273099' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:00:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:00:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686273099' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:00:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 181 KiB/s rd, 144 KiB/s wr, 119 op/s
Feb  2 07:00:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Feb  2 07:00:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Feb  2 07:00:55 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Feb  2 07:00:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:00:56 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/983206315' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:00:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:00:56 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/983206315' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:00:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 123 KiB/s wr, 102 op/s
Feb  2 07:00:56 np0005604943 nova_compute[238883]: 2026-02-02 12:00:56.301 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:00:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Feb  2 07:00:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Feb  2 07:00:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Feb  2 07:00:57 np0005604943 nova_compute[238883]: 2026-02-02 12:00:57.880 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:58 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:58.190 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:00:58 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:58.191 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 07:00:58 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:00:58.192 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:00:58 np0005604943 nova_compute[238883]: 2026-02-02 12:00:58.228 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:00:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 102 KiB/s wr, 157 op/s
Feb  2 07:01:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 95 KiB/s wr, 133 op/s
Feb  2 07:01:01 np0005604943 nova_compute[238883]: 2026-02-02 12:01:01.302 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.7 KiB/s wr, 64 op/s
Feb  2 07:01:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:01:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Feb  2 07:01:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Feb  2 07:01:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Feb  2 07:01:02 np0005604943 nova_compute[238883]: 2026-02-02 12:01:02.881 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:01:03 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/687047930' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:01:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:03 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2643710796' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:03 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2643710796' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Feb  2 07:01:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Feb  2 07:01:03 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Feb  2 07:01:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 298 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.5 MiB/s wr, 137 op/s
Feb  2 07:01:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Feb  2 07:01:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Feb  2 07:01:04 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Feb  2 07:01:05 np0005604943 podman[255645]: 2026-02-02 12:01:05.049119327 +0000 UTC m=+0.057206014 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb  2 07:01:05 np0005604943 podman[255644]: 2026-02-02 12:01:05.071059932 +0000 UTC m=+0.079017825 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127)
Feb  2 07:01:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Feb  2 07:01:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Feb  2 07:01:05 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Feb  2 07:01:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 298 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 658 KiB/s rd, 5.9 MiB/s wr, 167 op/s
Feb  2 07:01:06 np0005604943 nova_compute[238883]: 2026-02-02 12:01:06.304 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:07 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:07Z|00120|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb  2 07:01:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:01:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Feb  2 07:01:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Feb  2 07:01:07 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Feb  2 07:01:07 np0005604943 nova_compute[238883]: 2026-02-02 12:01:07.884 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 269 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 2.3 MiB/s wr, 163 op/s
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.812119) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033668812174, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2501, "num_deletes": 265, "total_data_size": 3547348, "memory_usage": 3602016, "flush_reason": "Manual Compaction"}
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033668827272, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3488243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21507, "largest_seqno": 24006, "table_properties": {"data_size": 3476162, "index_size": 8063, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 24888, "raw_average_key_size": 21, "raw_value_size": 3452169, "raw_average_value_size": 2970, "num_data_blocks": 350, "num_entries": 1162, "num_filter_entries": 1162, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033518, "oldest_key_time": 1770033518, "file_creation_time": 1770033668, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 15191 microseconds, and 7360 cpu microseconds.
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.827316) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3488243 bytes OK
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.827337) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.829621) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.829664) EVENT_LOG_v1 {"time_micros": 1770033668829654, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.829689) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3536500, prev total WAL file size 3536541, number of live WAL files 2.
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.830838) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3406KB)], [50(7496KB)]
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033668830875, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11164349, "oldest_snapshot_seqno": -1}
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5263 keys, 9389380 bytes, temperature: kUnknown
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033668883721, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9389380, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9349463, "index_size": 25663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 129764, "raw_average_key_size": 24, "raw_value_size": 9250017, "raw_average_value_size": 1757, "num_data_blocks": 1057, "num_entries": 5263, "num_filter_entries": 5263, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770033668, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.884298) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9389380 bytes
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.886878) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.0 rd, 176.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.3 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5799, records dropped: 536 output_compression: NoCompression
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.886896) EVENT_LOG_v1 {"time_micros": 1770033668886887, "job": 26, "event": "compaction_finished", "compaction_time_micros": 53163, "compaction_time_cpu_micros": 15566, "output_level": 6, "num_output_files": 1, "total_output_size": 9389380, "num_input_records": 5799, "num_output_records": 5263, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033668887387, "job": 26, "event": "table_file_deletion", "file_number": 52}
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033668888175, "job": 26, "event": "table_file_deletion", "file_number": 50}
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.830755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.888341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.888347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.888349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.888351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:01:08 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:01:08.888353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3991158125' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3991158125' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:01:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:01:09
Feb  2 07:01:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:01:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:01:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'images', '.mgr', 'volumes', 'backups', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Feb  2 07:01:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:01:09 np0005604943 podman[255831]: 2026-02-02 12:01:09.702839813 +0000 UTC m=+0.035391152 container create 933b9e65623822dc97e849cc78682e592d55f71c94bd43474788b3fa2da10351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:01:09 np0005604943 systemd[1]: Started libpod-conmon-933b9e65623822dc97e849cc78682e592d55f71c94bd43474788b3fa2da10351.scope.
Feb  2 07:01:09 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:01:09 np0005604943 podman[255831]: 2026-02-02 12:01:09.783880633 +0000 UTC m=+0.116431992 container init 933b9e65623822dc97e849cc78682e592d55f71c94bd43474788b3fa2da10351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 07:01:09 np0005604943 podman[255831]: 2026-02-02 12:01:09.687182738 +0000 UTC m=+0.019734107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:01:09 np0005604943 podman[255831]: 2026-02-02 12:01:09.791121269 +0000 UTC m=+0.123672608 container start 933b9e65623822dc97e849cc78682e592d55f71c94bd43474788b3fa2da10351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_lehmann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:01:09 np0005604943 podman[255831]: 2026-02-02 12:01:09.794191353 +0000 UTC m=+0.126742722 container attach 933b9e65623822dc97e849cc78682e592d55f71c94bd43474788b3fa2da10351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_lehmann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 07:01:09 np0005604943 silly_lehmann[255847]: 167 167
Feb  2 07:01:09 np0005604943 systemd[1]: libpod-933b9e65623822dc97e849cc78682e592d55f71c94bd43474788b3fa2da10351.scope: Deactivated successfully.
Feb  2 07:01:09 np0005604943 podman[255831]: 2026-02-02 12:01:09.798997343 +0000 UTC m=+0.131548682 container died 933b9e65623822dc97e849cc78682e592d55f71c94bd43474788b3fa2da10351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:01:09 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:01:09 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ec330e4dbad88f0de06f0cac3d91dd09cc29fab35ee19b23c7a118c208ad1705-merged.mount: Deactivated successfully.
Feb  2 07:01:09 np0005604943 podman[255831]: 2026-02-02 12:01:09.835508134 +0000 UTC m=+0.168059473 container remove 933b9e65623822dc97e849cc78682e592d55f71c94bd43474788b3fa2da10351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 07:01:09 np0005604943 systemd[1]: libpod-conmon-933b9e65623822dc97e849cc78682e592d55f71c94bd43474788b3fa2da10351.scope: Deactivated successfully.
Feb  2 07:01:09 np0005604943 podman[255869]: 2026-02-02 12:01:09.953766685 +0000 UTC m=+0.036329437 container create dd7714ac1dcd006b997ff3f54f833c02d970b962366cc0d4e3bb8e60794ff207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:01:09 np0005604943 systemd[1]: Started libpod-conmon-dd7714ac1dcd006b997ff3f54f833c02d970b962366cc0d4e3bb8e60794ff207.scope.
Feb  2 07:01:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:10.026 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:10 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:01:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:10.029 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:10.031 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3559a052cd8c7073ea74d167d2f48bba599c162843f8ee5c201038ebb036d47f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:01:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3559a052cd8c7073ea74d167d2f48bba599c162843f8ee5c201038ebb036d47f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:01:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3559a052cd8c7073ea74d167d2f48bba599c162843f8ee5c201038ebb036d47f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:01:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3559a052cd8c7073ea74d167d2f48bba599c162843f8ee5c201038ebb036d47f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:01:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3559a052cd8c7073ea74d167d2f48bba599c162843f8ee5c201038ebb036d47f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:01:10 np0005604943 podman[255869]: 2026-02-02 12:01:09.937190535 +0000 UTC m=+0.019753307 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:01:10 np0005604943 podman[255869]: 2026-02-02 12:01:10.049954005 +0000 UTC m=+0.132516767 container init dd7714ac1dcd006b997ff3f54f833c02d970b962366cc0d4e3bb8e60794ff207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rosalind, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:01:10 np0005604943 podman[255869]: 2026-02-02 12:01:10.055376462 +0000 UTC m=+0.137939214 container start dd7714ac1dcd006b997ff3f54f833c02d970b962366cc0d4e3bb8e60794ff207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rosalind, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:01:10 np0005604943 podman[255869]: 2026-02-02 12:01:10.058717674 +0000 UTC m=+0.141280546 container attach dd7714ac1dcd006b997ff3f54f833c02d970b962366cc0d4e3bb8e60794ff207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rosalind, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.113 238887 DEBUG oslo_concurrency.lockutils [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.114 238887 DEBUG oslo_concurrency.lockutils [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.147 238887 DEBUG nova.objects.instance [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lazy-loading 'flavor' on Instance uuid 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.182 238887 DEBUG oslo_concurrency.lockutils [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1584080183' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1584080183' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 269 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 12 KiB/s wr, 98 op/s
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.376 238887 DEBUG oslo_concurrency.lockutils [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.376 238887 DEBUG oslo_concurrency.lockutils [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.377 238887 INFO nova.compute.manager [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Attaching volume b765a100-122f-42d6-8e34-79ed7beda2c8 to /dev/vdb#033[00m
Feb  2 07:01:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1925214295' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1925214295' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:10 np0005604943 intelligent_rosalind[255887]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:01:10 np0005604943 intelligent_rosalind[255887]: --> All data devices are unavailable
Feb  2 07:01:10 np0005604943 systemd[1]: libpod-dd7714ac1dcd006b997ff3f54f833c02d970b962366cc0d4e3bb8e60794ff207.scope: Deactivated successfully.
Feb  2 07:01:10 np0005604943 podman[255869]: 2026-02-02 12:01:10.487838282 +0000 UTC m=+0.570401034 container died dd7714ac1dcd006b997ff3f54f833c02d970b962366cc0d4e3bb8e60794ff207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.502 238887 DEBUG os_brick.utils [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.504 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:01:10 np0005604943 systemd[1]: var-lib-containers-storage-overlay-3559a052cd8c7073ea74d167d2f48bba599c162843f8ee5c201038ebb036d47f-merged.mount: Deactivated successfully.
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.514 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.514 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[506c3454-f577-483d-898b-df53c9cad7ce]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.517 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.524 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.524 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff0621a-dd22-424d-be92-36a751ea79a4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.526 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:01:10 np0005604943 podman[255869]: 2026-02-02 12:01:10.531062395 +0000 UTC m=+0.613625147 container remove dd7714ac1dcd006b997ff3f54f833c02d970b962366cc0d4e3bb8e60794ff207 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.533 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.534 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[c45ae473-b858-41ca-9335-18fffa97b532]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.535 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[7aff1093-4956-40ec-8dd6-714ff31599e8]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.536 238887 DEBUG oslo_concurrency.processutils [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:01:10 np0005604943 systemd[1]: libpod-conmon-dd7714ac1dcd006b997ff3f54f833c02d970b962366cc0d4e3bb8e60794ff207.scope: Deactivated successfully.
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.559 238887 DEBUG oslo_concurrency.processutils [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.561 238887 DEBUG os_brick.initiator.connectors.lightos [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.562 238887 DEBUG os_brick.initiator.connectors.lightos [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.562 238887 DEBUG os_brick.initiator.connectors.lightos [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.562 238887 DEBUG os_brick.utils [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:01:10 np0005604943 nova_compute[238883]: 2026-02-02 12:01:10.563 238887 DEBUG nova.virt.block_device [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Updating existing volume attachment record: b0030842-feaa-4831-b872-9fbcd8ac00fe _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:01:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:01:10 np0005604943 podman[255987]: 2026-02-02 12:01:10.952901086 +0000 UTC m=+0.036866092 container create b7e9721f65f7cde4e2139a981cbf48a44f4709b0f13bbb9a3a665ee82c2a5da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hypatia, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 07:01:10 np0005604943 systemd[1]: Started libpod-conmon-b7e9721f65f7cde4e2139a981cbf48a44f4709b0f13bbb9a3a665ee82c2a5da6.scope.
Feb  2 07:01:11 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:01:11 np0005604943 podman[255987]: 2026-02-02 12:01:11.029065114 +0000 UTC m=+0.113030120 container init b7e9721f65f7cde4e2139a981cbf48a44f4709b0f13bbb9a3a665ee82c2a5da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hypatia, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 07:01:11 np0005604943 podman[255987]: 2026-02-02 12:01:10.937753104 +0000 UTC m=+0.021718120 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:01:11 np0005604943 podman[255987]: 2026-02-02 12:01:11.034547041 +0000 UTC m=+0.118512037 container start b7e9721f65f7cde4e2139a981cbf48a44f4709b0f13bbb9a3a665ee82c2a5da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:01:11 np0005604943 podman[255987]: 2026-02-02 12:01:11.037745549 +0000 UTC m=+0.121710565 container attach b7e9721f65f7cde4e2139a981cbf48a44f4709b0f13bbb9a3a665ee82c2a5da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hypatia, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:01:11 np0005604943 admiring_hypatia[256003]: 167 167
Feb  2 07:01:11 np0005604943 systemd[1]: libpod-b7e9721f65f7cde4e2139a981cbf48a44f4709b0f13bbb9a3a665ee82c2a5da6.scope: Deactivated successfully.
Feb  2 07:01:11 np0005604943 podman[255987]: 2026-02-02 12:01:11.039780784 +0000 UTC m=+0.123745780 container died b7e9721f65f7cde4e2139a981cbf48a44f4709b0f13bbb9a3a665ee82c2a5da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hypatia, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:01:11 np0005604943 systemd[1]: var-lib-containers-storage-overlay-2995aab05e8715e92a6c44674a4f9b9b057b189f0916c5e36ef541d0e753b070-merged.mount: Deactivated successfully.
Feb  2 07:01:11 np0005604943 podman[255987]: 2026-02-02 12:01:11.070461827 +0000 UTC m=+0.154426823 container remove b7e9721f65f7cde4e2139a981cbf48a44f4709b0f13bbb9a3a665ee82c2a5da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:01:11 np0005604943 systemd[1]: libpod-conmon-b7e9721f65f7cde4e2139a981cbf48a44f4709b0f13bbb9a3a665ee82c2a5da6.scope: Deactivated successfully.
Feb  2 07:01:11 np0005604943 podman[256027]: 2026-02-02 12:01:11.206732445 +0000 UTC m=+0.035450043 container create c3feebfe7bfa73617692ed1ab4389b7e56d7d96bf9b85b646956e9607eaf4ea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 07:01:11 np0005604943 systemd[1]: Started libpod-conmon-c3feebfe7bfa73617692ed1ab4389b7e56d7d96bf9b85b646956e9607eaf4ea0.scope.
Feb  2 07:01:11 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:01:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4deb799554b841689dfdab33ff1f324c5729d8b2a755465502e03650a814fb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:01:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4deb799554b841689dfdab33ff1f324c5729d8b2a755465502e03650a814fb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:01:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4deb799554b841689dfdab33ff1f324c5729d8b2a755465502e03650a814fb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:01:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4deb799554b841689dfdab33ff1f324c5729d8b2a755465502e03650a814fb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:01:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:01:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2555798341' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:01:11 np0005604943 podman[256027]: 2026-02-02 12:01:11.279980434 +0000 UTC m=+0.108698052 container init c3feebfe7bfa73617692ed1ab4389b7e56d7d96bf9b85b646956e9607eaf4ea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 07:01:11 np0005604943 podman[256027]: 2026-02-02 12:01:11.286602414 +0000 UTC m=+0.115320012 container start c3feebfe7bfa73617692ed1ab4389b7e56d7d96bf9b85b646956e9607eaf4ea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chatelet, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:01:11 np0005604943 podman[256027]: 2026-02-02 12:01:11.192680134 +0000 UTC m=+0.021397752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:01:11 np0005604943 podman[256027]: 2026-02-02 12:01:11.290215902 +0000 UTC m=+0.118933550 container attach c3feebfe7bfa73617692ed1ab4389b7e56d7d96bf9b85b646956e9607eaf4ea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chatelet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 07:01:11 np0005604943 nova_compute[238883]: 2026-02-02 12:01:11.307 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:11 np0005604943 nova_compute[238883]: 2026-02-02 12:01:11.477 238887 DEBUG nova.objects.instance [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lazy-loading 'flavor' on Instance uuid 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:01:11 np0005604943 nova_compute[238883]: 2026-02-02 12:01:11.501 238887 DEBUG nova.virt.libvirt.driver [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Attempting to attach volume b765a100-122f-42d6-8e34-79ed7beda2c8 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 07:01:11 np0005604943 nova_compute[238883]: 2026-02-02 12:01:11.504 238887 DEBUG nova.virt.libvirt.guest [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 07:01:11 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:01:11 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-b765a100-122f-42d6-8e34-79ed7beda2c8">
Feb  2 07:01:11 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:01:11 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:01:11 np0005604943 nova_compute[238883]:  <auth username="openstack">
Feb  2 07:01:11 np0005604943 nova_compute[238883]:    <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:01:11 np0005604943 nova_compute[238883]:  </auth>
Feb  2 07:01:11 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:01:11 np0005604943 nova_compute[238883]:  <serial>b765a100-122f-42d6-8e34-79ed7beda2c8</serial>
Feb  2 07:01:11 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:01:11 np0005604943 nova_compute[238883]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]: {
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:    "0": [
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:        {
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "devices": [
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "/dev/loop3"
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            ],
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_name": "ceph_lv0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_size": "21470642176",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "name": "ceph_lv0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "tags": {
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.cluster_name": "ceph",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.crush_device_class": "",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.encrypted": "0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.objectstore": "bluestore",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.osd_id": "0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.type": "block",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.vdo": "0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.with_tpm": "0"
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            },
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "type": "block",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "vg_name": "ceph_vg0"
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:        }
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:    ],
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:    "1": [
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:        {
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "devices": [
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "/dev/loop4"
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            ],
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_name": "ceph_lv1",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_size": "21470642176",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "name": "ceph_lv1",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "tags": {
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.cluster_name": "ceph",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.crush_device_class": "",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.encrypted": "0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.objectstore": "bluestore",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.osd_id": "1",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.type": "block",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.vdo": "0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.with_tpm": "0"
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            },
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "type": "block",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "vg_name": "ceph_vg1"
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:        }
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:    ],
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:    "2": [
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:        {
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "devices": [
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "/dev/loop5"
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            ],
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_name": "ceph_lv2",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_size": "21470642176",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "name": "ceph_lv2",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "tags": {
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.cluster_name": "ceph",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.crush_device_class": "",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.encrypted": "0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.objectstore": "bluestore",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.osd_id": "2",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.type": "block",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.vdo": "0",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:                "ceph.with_tpm": "0"
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            },
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "type": "block",
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:            "vg_name": "ceph_vg2"
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:        }
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]:    ]
Feb  2 07:01:11 np0005604943 unruffled_chatelet[256043]: }
Feb  2 07:01:11 np0005604943 systemd[1]: libpod-c3feebfe7bfa73617692ed1ab4389b7e56d7d96bf9b85b646956e9607eaf4ea0.scope: Deactivated successfully.
Feb  2 07:01:11 np0005604943 podman[256027]: 2026-02-02 12:01:11.613127688 +0000 UTC m=+0.441845296 container died c3feebfe7bfa73617692ed1ab4389b7e56d7d96bf9b85b646956e9607eaf4ea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chatelet, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:01:11 np0005604943 nova_compute[238883]: 2026-02-02 12:01:11.616 238887 DEBUG nova.virt.libvirt.driver [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:01:11 np0005604943 nova_compute[238883]: 2026-02-02 12:01:11.617 238887 DEBUG nova.virt.libvirt.driver [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:01:11 np0005604943 nova_compute[238883]: 2026-02-02 12:01:11.617 238887 DEBUG nova.virt.libvirt.driver [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:01:11 np0005604943 nova_compute[238883]: 2026-02-02 12:01:11.617 238887 DEBUG nova.virt.libvirt.driver [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] No VIF found with MAC fa:16:3e:74:a4:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:01:11 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f4deb799554b841689dfdab33ff1f324c5729d8b2a755465502e03650a814fb1-merged.mount: Deactivated successfully.
Feb  2 07:01:11 np0005604943 podman[256027]: 2026-02-02 12:01:11.662131698 +0000 UTC m=+0.490849296 container remove c3feebfe7bfa73617692ed1ab4389b7e56d7d96bf9b85b646956e9607eaf4ea0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chatelet, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Feb  2 07:01:11 np0005604943 systemd[1]: libpod-conmon-c3feebfe7bfa73617692ed1ab4389b7e56d7d96bf9b85b646956e9607eaf4ea0.scope: Deactivated successfully.
Feb  2 07:01:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2067924630' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2067924630' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:11 np0005604943 nova_compute[238883]: 2026-02-02 12:01:11.882 238887 DEBUG oslo_concurrency.lockutils [None req-7b3373c3-ff0d-4177-aaca-5f404a4daa3a 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:12 np0005604943 podman[256144]: 2026-02-02 12:01:12.119357109 +0000 UTC m=+0.046169065 container create da276c817dab70cecb716ca2cdbf87523d5324f1990de14f477fee17c84ff310 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:01:12 np0005604943 systemd[1]: Started libpod-conmon-da276c817dab70cecb716ca2cdbf87523d5324f1990de14f477fee17c84ff310.scope.
Feb  2 07:01:12 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:01:12 np0005604943 podman[256144]: 2026-02-02 12:01:12.174796314 +0000 UTC m=+0.101608290 container init da276c817dab70cecb716ca2cdbf87523d5324f1990de14f477fee17c84ff310 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 07:01:12 np0005604943 podman[256144]: 2026-02-02 12:01:12.180297133 +0000 UTC m=+0.107109089 container start da276c817dab70cecb716ca2cdbf87523d5324f1990de14f477fee17c84ff310 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 07:01:12 np0005604943 pedantic_panini[256161]: 167 167
Feb  2 07:01:12 np0005604943 systemd[1]: libpod-da276c817dab70cecb716ca2cdbf87523d5324f1990de14f477fee17c84ff310.scope: Deactivated successfully.
Feb  2 07:01:12 np0005604943 conmon[256161]: conmon da276c817dab70cecb71 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da276c817dab70cecb716ca2cdbf87523d5324f1990de14f477fee17c84ff310.scope/container/memory.events
Feb  2 07:01:12 np0005604943 podman[256144]: 2026-02-02 12:01:12.185789643 +0000 UTC m=+0.112601599 container attach da276c817dab70cecb716ca2cdbf87523d5324f1990de14f477fee17c84ff310 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:01:12 np0005604943 podman[256144]: 2026-02-02 12:01:12.186619094 +0000 UTC m=+0.113431060 container died da276c817dab70cecb716ca2cdbf87523d5324f1990de14f477fee17c84ff310 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 07:01:12 np0005604943 podman[256144]: 2026-02-02 12:01:12.094279778 +0000 UTC m=+0.021091764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:01:12 np0005604943 systemd[1]: var-lib-containers-storage-overlay-983b1f5726e7b0f39f418eaa9b8a77f7fded52fbd4af958a0a335be133ef6f03-merged.mount: Deactivated successfully.
Feb  2 07:01:12 np0005604943 podman[256144]: 2026-02-02 12:01:12.220524736 +0000 UTC m=+0.147336692 container remove da276c817dab70cecb716ca2cdbf87523d5324f1990de14f477fee17c84ff310 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_panini, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:01:12 np0005604943 systemd[1]: libpod-conmon-da276c817dab70cecb716ca2cdbf87523d5324f1990de14f477fee17c84ff310.scope: Deactivated successfully.
Feb  2 07:01:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 269 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 10 KiB/s wr, 83 op/s
Feb  2 07:01:12 np0005604943 podman[256184]: 2026-02-02 12:01:12.36399828 +0000 UTC m=+0.039125273 container create e50c15253943dce86f0b94a25aed4d1803b4a43b472991c5126b534cf9529650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_cerf, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 07:01:12 np0005604943 systemd[1]: Started libpod-conmon-e50c15253943dce86f0b94a25aed4d1803b4a43b472991c5126b534cf9529650.scope.
Feb  2 07:01:12 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:01:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3d3a39de0c281f2e195d2ceaa55588f9bfc8ead546d18342d7ed260dba7084/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:01:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3d3a39de0c281f2e195d2ceaa55588f9bfc8ead546d18342d7ed260dba7084/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:01:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3d3a39de0c281f2e195d2ceaa55588f9bfc8ead546d18342d7ed260dba7084/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:01:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3d3a39de0c281f2e195d2ceaa55588f9bfc8ead546d18342d7ed260dba7084/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:01:12 np0005604943 podman[256184]: 2026-02-02 12:01:12.424014579 +0000 UTC m=+0.099141592 container init e50c15253943dce86f0b94a25aed4d1803b4a43b472991c5126b534cf9529650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:01:12 np0005604943 podman[256184]: 2026-02-02 12:01:12.433382454 +0000 UTC m=+0.108509447 container start e50c15253943dce86f0b94a25aed4d1803b4a43b472991c5126b534cf9529650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 07:01:12 np0005604943 podman[256184]: 2026-02-02 12:01:12.436698713 +0000 UTC m=+0.111825716 container attach e50c15253943dce86f0b94a25aed4d1803b4a43b472991c5126b534cf9529650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_cerf, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:01:12 np0005604943 podman[256184]: 2026-02-02 12:01:12.346717061 +0000 UTC m=+0.021844084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:01:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:01:12 np0005604943 nova_compute[238883]: 2026-02-02 12:01:12.887 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:13 np0005604943 lvm[256279]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:01:13 np0005604943 lvm[256276]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:01:13 np0005604943 lvm[256276]: VG ceph_vg0 finished
Feb  2 07:01:13 np0005604943 lvm[256279]: VG ceph_vg1 finished
Feb  2 07:01:13 np0005604943 lvm[256280]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:01:13 np0005604943 lvm[256280]: VG ceph_vg2 finished
Feb  2 07:01:13 np0005604943 musing_cerf[256200]: {}
Feb  2 07:01:13 np0005604943 systemd[1]: libpod-e50c15253943dce86f0b94a25aed4d1803b4a43b472991c5126b534cf9529650.scope: Deactivated successfully.
Feb  2 07:01:13 np0005604943 systemd[1]: libpod-e50c15253943dce86f0b94a25aed4d1803b4a43b472991c5126b534cf9529650.scope: Consumed 1.112s CPU time.
Feb  2 07:01:13 np0005604943 podman[256184]: 2026-02-02 12:01:13.211881596 +0000 UTC m=+0.887008589 container died e50c15253943dce86f0b94a25aed4d1803b4a43b472991c5126b534cf9529650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 07:01:13 np0005604943 systemd[1]: var-lib-containers-storage-overlay-7d3d3a39de0c281f2e195d2ceaa55588f9bfc8ead546d18342d7ed260dba7084-merged.mount: Deactivated successfully.
Feb  2 07:01:13 np0005604943 podman[256184]: 2026-02-02 12:01:13.251645625 +0000 UTC m=+0.926772618 container remove e50c15253943dce86f0b94a25aed4d1803b4a43b472991c5126b534cf9529650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_cerf, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 07:01:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1257294386' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1257294386' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:13 np0005604943 systemd[1]: libpod-conmon-e50c15253943dce86f0b94a25aed4d1803b4a43b472991c5126b534cf9529650.scope: Deactivated successfully.
Feb  2 07:01:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:01:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:01:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:01:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:01:13 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:01:13 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:01:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 271 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 268 KiB/s wr, 204 op/s
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.325 238887 DEBUG oslo_concurrency.lockutils [None req-8df36cf9-938b-4a37-b0f1-0bd7d7c88a48 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.326 238887 DEBUG oslo_concurrency.lockutils [None req-8df36cf9-938b-4a37-b0f1-0bd7d7c88a48 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.339 238887 INFO nova.compute.manager [None req-8df36cf9-938b-4a37-b0f1-0bd7d7c88a48 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Detaching volume b765a100-122f-42d6-8e34-79ed7beda2c8#033[00m
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.475 238887 INFO nova.virt.block_device [None req-8df36cf9-938b-4a37-b0f1-0bd7d7c88a48 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Attempting to driver detach volume b765a100-122f-42d6-8e34-79ed7beda2c8 from mountpoint /dev/vdb#033[00m
Feb  2 07:01:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2049458675' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.485 238887 DEBUG nova.virt.libvirt.driver [None req-8df36cf9-938b-4a37-b0f1-0bd7d7c88a48 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Attempting to detach device vdb from instance 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.486 238887 DEBUG nova.virt.libvirt.guest [None req-8df36cf9-938b-4a37-b0f1-0bd7d7c88a48 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:01:14 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:01:14 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-b765a100-122f-42d6-8e34-79ed7beda2c8">
Feb  2 07:01:14 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:01:14 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:01:14 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:01:14 np0005604943 nova_compute[238883]:  <serial>b765a100-122f-42d6-8e34-79ed7beda2c8</serial>
Feb  2 07:01:14 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:01:14 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:01:14 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:01:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2049458675' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.493 238887 INFO nova.virt.libvirt.driver [None req-8df36cf9-938b-4a37-b0f1-0bd7d7c88a48 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Successfully detached device vdb from instance 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 from the persistent domain config.#033[00m
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.493 238887 DEBUG nova.virt.libvirt.driver [None req-8df36cf9-938b-4a37-b0f1-0bd7d7c88a48 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.493 238887 DEBUG nova.virt.libvirt.guest [None req-8df36cf9-938b-4a37-b0f1-0bd7d7c88a48 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:01:14 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:01:14 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-b765a100-122f-42d6-8e34-79ed7beda2c8">
Feb  2 07:01:14 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:01:14 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:01:14 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:01:14 np0005604943 nova_compute[238883]:  <serial>b765a100-122f-42d6-8e34-79ed7beda2c8</serial>
Feb  2 07:01:14 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:01:14 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:01:14 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.544 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Received event <DeviceRemovedEvent: 1770033674.5436482, 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.545 238887 DEBUG nova.virt.libvirt.driver [None req-8df36cf9-938b-4a37-b0f1-0bd7d7c88a48 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.547 238887 INFO nova.virt.libvirt.driver [None req-8df36cf9-938b-4a37-b0f1-0bd7d7c88a48 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Successfully detached device vdb from instance 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 from the live domain config.#033[00m
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.701 238887 DEBUG nova.objects.instance [None req-8df36cf9-938b-4a37-b0f1-0bd7d7c88a48 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lazy-loading 'flavor' on Instance uuid 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:01:14 np0005604943 nova_compute[238883]: 2026-02-02 12:01:14.735 238887 DEBUG oslo_concurrency.lockutils [None req-8df36cf9-938b-4a37-b0f1-0bd7d7c88a48 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Feb  2 07:01:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Feb  2 07:01:15 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.039 238887 DEBUG nova.compute.manager [req-12b36555-b853-42da-93de-e2627613fd0d req-23adbf60-5b9c-4510-8c5b-56f2ed045301 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Received event network-changed-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.039 238887 DEBUG nova.compute.manager [req-12b36555-b853-42da-93de-e2627613fd0d req-23adbf60-5b9c-4510-8c5b-56f2ed045301 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Refreshing instance network info cache due to event network-changed-1d1b9b21-b452-4b32-a535-8b2ecfac26e6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.040 238887 DEBUG oslo_concurrency.lockutils [req-12b36555-b853-42da-93de-e2627613fd0d req-23adbf60-5b9c-4510-8c5b-56f2ed045301 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.040 238887 DEBUG oslo_concurrency.lockutils [req-12b36555-b853-42da-93de-e2627613fd0d req-23adbf60-5b9c-4510-8c5b-56f2ed045301 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.040 238887 DEBUG nova.network.neutron [req-12b36555-b853-42da-93de-e2627613fd0d req-23adbf60-5b9c-4510-8c5b-56f2ed045301 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Refreshing network info cache for port 1d1b9b21-b452-4b32-a535-8b2ecfac26e6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.150 238887 DEBUG oslo_concurrency.lockutils [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.150 238887 DEBUG oslo_concurrency.lockutils [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.150 238887 DEBUG oslo_concurrency.lockutils [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.151 238887 DEBUG oslo_concurrency.lockutils [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.151 238887 DEBUG oslo_concurrency.lockutils [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.152 238887 INFO nova.compute.manager [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Terminating instance#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.152 238887 DEBUG nova.compute.manager [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:01:16 np0005604943 kernel: tap1d1b9b21-b4 (unregistering): left promiscuous mode
Feb  2 07:01:16 np0005604943 NetworkManager[49093]: <info>  [1770033676.1938] device (tap1d1b9b21-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:01:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3770213867' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:16 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:16Z|00121|binding|INFO|Releasing lport 1d1b9b21-b452-4b32-a535-8b2ecfac26e6 from this chassis (sb_readonly=0)
Feb  2 07:01:16 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:16Z|00122|binding|INFO|Setting lport 1d1b9b21-b452-4b32-a535-8b2ecfac26e6 down in Southbound
Feb  2 07:01:16 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:16Z|00123|binding|INFO|Removing iface tap1d1b9b21-b4 ovn-installed in OVS
Feb  2 07:01:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.200 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3770213867' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.209 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:a4:df 10.100.0.10'], port_security=['fa:16:3e:74:a4:df 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de22212f-33f4-472b-8b67-05be2c5418f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc9ca354da4dd4bdccf919f13d3561', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c1acc81f-70be-41e7-925b-e46224557e82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d9a408e-4ec8-415c-980d-60f0a24de8bc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=1d1b9b21-b452-4b32-a535-8b2ecfac26e6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.210 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 1d1b9b21-b452-4b32-a535-8b2ecfac26e6 in datapath de22212f-33f4-472b-8b67-05be2c5418f5 unbound from our chassis#033[00m
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.211 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de22212f-33f4-472b-8b67-05be2c5418f5#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.213 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.228 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c923c3e5-69d0-49d7-b44a-99e4cccbf899]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:16 np0005604943 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Feb  2 07:01:16 np0005604943 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 14.151s CPU time.
Feb  2 07:01:16 np0005604943 systemd-machined[206973]: Machine qemu-12-instance-0000000c terminated.
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.251 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[40852403-ac7b-4304-903b-c7c99194a740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.256 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf1aecd-e2f9-457e-9072-189fba47b8f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.282 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[b29edc6e-ef34-4e88-a344-91a0ac92f1e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 271 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 264 KiB/s wr, 166 op/s
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.297 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[60758703-a9f6-4945-bc35-74ed7206333c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde22212f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:22:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403687, 'reachable_time': 42411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256333, 'error': None, 'target': 'ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.309 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.314 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[59e185e8-0e2d-44e6-918b-45e9036f43e2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapde22212f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403696, 'tstamp': 403696}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256334, 'error': None, 'target': 'ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapde22212f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403698, 'tstamp': 403698}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256334, 'error': None, 'target': 'ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.316 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde22212f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.317 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.320 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.321 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde22212f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.321 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.321 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde22212f-30, col_values=(('external_ids', {'iface-id': '6001fd23-eaf7-4f4e-bf94-96506f1de9d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:01:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:16.322 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.369 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.372 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.385 238887 INFO nova.virt.libvirt.driver [-] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Instance destroyed successfully.#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.386 238887 DEBUG nova.objects.instance [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lazy-loading 'resources' on Instance uuid 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.398 238887 DEBUG nova.virt.libvirt.vif [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:00:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-863790867',display_name='tempest-TestStampPattern-server-863790867',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-863790867',id=12,image_ref='3c21bba8-7447-4f7b-8add-32d60d531dee',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHkgfVtqBda0LVlF5slmF25Lo/XwS8Q8Sghn9kMaubVvv9bxRUWvKYk1te57NsoxW3EiHAVoG8/mfQ9ewKRmH/t5lWTLgWAau4XX+kOaKUVaGSh/OmZZNeyoLD4n3OeH0A==',key_name='tempest-TestStampPattern-80198922',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:00:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='82fc9ca354da4dd4bdccf919f13d3561',ramdisk_id='',reservation_id='r-zzuf9m4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='1b038c3f-57e2-4f69-a27c-2ba8d465dfc1',image_min_disk='1',image_min_ram='0',image_owner_id='82fc9ca354da4dd4bdccf919f13d3561',image_owner_project_name='tempest-TestStampPattern-577361379',image_owner_user_name='tempest-TestStampPattern-577361379-project-member',image_user_id='55f5d320b54948c9a8f465d017972291',owner_project_name='tempest-TestStampPattern-577361379',owner_user_name='tempest-TestStampPattern-577361379-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:00:34Z,user_data=None,user_id='55f5d320b54948c9a8f465d017972291',uuid=959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='acti
ve') vif={"id": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "address": "fa:16:3e:74:a4:df", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d1b9b21-b4", "ovs_interfaceid": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.399 238887 DEBUG nova.network.os_vif_util [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Converting VIF {"id": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "address": "fa:16:3e:74:a4:df", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d1b9b21-b4", "ovs_interfaceid": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.400 238887 DEBUG nova.network.os_vif_util [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:74:a4:df,bridge_name='br-int',has_traffic_filtering=True,id=1d1b9b21-b452-4b32-a535-8b2ecfac26e6,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d1b9b21-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.400 238887 DEBUG os_vif [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:a4:df,bridge_name='br-int',has_traffic_filtering=True,id=1d1b9b21-b452-4b32-a535-8b2ecfac26e6,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d1b9b21-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.402 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.403 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d1b9b21-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.404 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.406 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.408 238887 INFO os_vif [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:a4:df,bridge_name='br-int',has_traffic_filtering=True,id=1d1b9b21-b452-4b32-a535-8b2ecfac26e6,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d1b9b21-b4')#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.644 238887 INFO nova.virt.libvirt.driver [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Deleting instance files /var/lib/nova/instances/959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8_del#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.645 238887 INFO nova.virt.libvirt.driver [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Deletion of /var/lib/nova/instances/959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8_del complete#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.691 238887 INFO nova.compute.manager [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Took 0.54 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.691 238887 DEBUG oslo.service.loopingcall [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.691 238887 DEBUG nova.compute.manager [-] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:01:16 np0005604943 nova_compute[238883]: 2026-02-02 12:01:16.691 238887 DEBUG nova.network.neutron [-] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:01:17 np0005604943 nova_compute[238883]: 2026-02-02 12:01:17.313 238887 DEBUG nova.network.neutron [req-12b36555-b853-42da-93de-e2627613fd0d req-23adbf60-5b9c-4510-8c5b-56f2ed045301 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Updated VIF entry in instance network info cache for port 1d1b9b21-b452-4b32-a535-8b2ecfac26e6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:01:17 np0005604943 nova_compute[238883]: 2026-02-02 12:01:17.313 238887 DEBUG nova.network.neutron [req-12b36555-b853-42da-93de-e2627613fd0d req-23adbf60-5b9c-4510-8c5b-56f2ed045301 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Updating instance_info_cache with network_info: [{"id": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "address": "fa:16:3e:74:a4:df", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d1b9b21-b4", "ovs_interfaceid": "1d1b9b21-b452-4b32-a535-8b2ecfac26e6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:01:17 np0005604943 nova_compute[238883]: 2026-02-02 12:01:17.333 238887 DEBUG oslo_concurrency.lockutils [req-12b36555-b853-42da-93de-e2627613fd0d req-23adbf60-5b9c-4510-8c5b-56f2ed045301 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:01:17 np0005604943 nova_compute[238883]: 2026-02-02 12:01:17.359 238887 DEBUG nova.network.neutron [-] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:01:17 np0005604943 nova_compute[238883]: 2026-02-02 12:01:17.379 238887 INFO nova.compute.manager [-] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Took 0.69 seconds to deallocate network for instance.#033[00m
Feb  2 07:01:17 np0005604943 nova_compute[238883]: 2026-02-02 12:01:17.428 238887 DEBUG nova.compute.manager [req-60e23b5b-5d6f-43f5-bb7f-da5e910fb8d5 req-4ed57032-869c-462e-8921-cde8ab9a241c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Received event network-vif-deleted-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:01:17 np0005604943 nova_compute[238883]: 2026-02-02 12:01:17.439 238887 DEBUG oslo_concurrency.lockutils [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:17 np0005604943 nova_compute[238883]: 2026-02-02 12:01:17.440 238887 DEBUG oslo_concurrency.lockutils [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:17 np0005604943 nova_compute[238883]: 2026-02-02 12:01:17.496 238887 DEBUG oslo_concurrency.processutils [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:01:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:01:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Feb  2 07:01:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Feb  2 07:01:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Feb  2 07:01:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:01:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3368200316' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.047 238887 DEBUG oslo_concurrency.processutils [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.052 238887 DEBUG nova.compute.provider_tree [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.069 238887 DEBUG nova.scheduler.client.report [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.093 238887 DEBUG oslo_concurrency.lockutils [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.138 238887 INFO nova.scheduler.client.report [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Deleted allocations for instance 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.150 238887 DEBUG nova.compute.manager [req-0266dc3f-a1ed-4581-bcf7-7536ca318431 req-a1817a59-e94e-40b3-b42a-09f539199d7d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Received event network-vif-unplugged-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.151 238887 DEBUG oslo_concurrency.lockutils [req-0266dc3f-a1ed-4581-bcf7-7536ca318431 req-a1817a59-e94e-40b3-b42a-09f539199d7d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.151 238887 DEBUG oslo_concurrency.lockutils [req-0266dc3f-a1ed-4581-bcf7-7536ca318431 req-a1817a59-e94e-40b3-b42a-09f539199d7d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.151 238887 DEBUG oslo_concurrency.lockutils [req-0266dc3f-a1ed-4581-bcf7-7536ca318431 req-a1817a59-e94e-40b3-b42a-09f539199d7d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.151 238887 DEBUG nova.compute.manager [req-0266dc3f-a1ed-4581-bcf7-7536ca318431 req-a1817a59-e94e-40b3-b42a-09f539199d7d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] No waiting events found dispatching network-vif-unplugged-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.151 238887 WARNING nova.compute.manager [req-0266dc3f-a1ed-4581-bcf7-7536ca318431 req-a1817a59-e94e-40b3-b42a-09f539199d7d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Received unexpected event network-vif-unplugged-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.152 238887 DEBUG nova.compute.manager [req-0266dc3f-a1ed-4581-bcf7-7536ca318431 req-a1817a59-e94e-40b3-b42a-09f539199d7d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Received event network-vif-plugged-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.152 238887 DEBUG oslo_concurrency.lockutils [req-0266dc3f-a1ed-4581-bcf7-7536ca318431 req-a1817a59-e94e-40b3-b42a-09f539199d7d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.152 238887 DEBUG oslo_concurrency.lockutils [req-0266dc3f-a1ed-4581-bcf7-7536ca318431 req-a1817a59-e94e-40b3-b42a-09f539199d7d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.152 238887 DEBUG oslo_concurrency.lockutils [req-0266dc3f-a1ed-4581-bcf7-7536ca318431 req-a1817a59-e94e-40b3-b42a-09f539199d7d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.152 238887 DEBUG nova.compute.manager [req-0266dc3f-a1ed-4581-bcf7-7536ca318431 req-a1817a59-e94e-40b3-b42a-09f539199d7d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] No waiting events found dispatching network-vif-plugged-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.152 238887 WARNING nova.compute.manager [req-0266dc3f-a1ed-4581-bcf7-7536ca318431 req-a1817a59-e94e-40b3-b42a-09f539199d7d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Received unexpected event network-vif-plugged-1d1b9b21-b452-4b32-a535-8b2ecfac26e6 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:01:18 np0005604943 nova_compute[238883]: 2026-02-02 12:01:18.197 238887 DEBUG oslo_concurrency.lockutils [None req-74c09495-9da6-456e-b86b-72abb28382c4 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 259 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 724 KiB/s rd, 263 KiB/s wr, 210 op/s
Feb  2 07:01:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2634903485' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2634903485' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 250 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 744 KiB/s rd, 267 KiB/s wr, 242 op/s
Feb  2 07:01:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Feb  2 07:01:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Feb  2 07:01:20 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Feb  2 07:01:21 np0005604943 nova_compute[238883]: 2026-02-02 12:01:21.343 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4115299944' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4115299944' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:21 np0005604943 nova_compute[238883]: 2026-02-02 12:01:21.404 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007625856521873726 of space, bias 1.0, pg target 0.22877569565621175 quantized to 32 (current 32)
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00042289663527078164 of space, bias 1.0, pg target 0.1268689905812345 quantized to 32 (current 32)
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 8.013733291700497e-07 of space, bias 1.0, pg target 0.00024041199875101491 quantized to 32 (current 32)
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014240698259525904 of space, bias 1.0, pg target 0.4272209477857771 quantized to 32 (current 32)
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.589528419577594e-07 of space, bias 4.0, pg target 0.0011507434103493112 quantized to 16 (current 16)
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:01:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 07:01:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Feb  2 07:01:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Feb  2 07:01:21 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Feb  2 07:01:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/212546757' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/212546757' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 250 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 813 KiB/s rd, 10 KiB/s wr, 141 op/s
Feb  2 07:01:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3270049151' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3270049151' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:01:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Feb  2 07:01:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Feb  2 07:01:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Feb  2 07:01:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1888048424' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1888048424' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.597 238887 DEBUG nova.compute.manager [req-855e7198-d149-40d2-bcbb-a0ed9eace6f2 req-6cff54a3-41f5-4159-842a-a0ed56c336ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received event network-changed-b03048b5-3014-4343-9639-e364514f44d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.597 238887 DEBUG nova.compute.manager [req-855e7198-d149-40d2-bcbb-a0ed9eace6f2 req-6cff54a3-41f5-4159-842a-a0ed56c336ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Refreshing instance network info cache due to event network-changed-b03048b5-3014-4343-9639-e364514f44d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.598 238887 DEBUG oslo_concurrency.lockutils [req-855e7198-d149-40d2-bcbb-a0ed9eace6f2 req-6cff54a3-41f5-4159-842a-a0ed56c336ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.598 238887 DEBUG oslo_concurrency.lockutils [req-855e7198-d149-40d2-bcbb-a0ed9eace6f2 req-6cff54a3-41f5-4159-842a-a0ed56c336ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.598 238887 DEBUG nova.network.neutron [req-855e7198-d149-40d2-bcbb-a0ed9eace6f2 req-6cff54a3-41f5-4159-842a-a0ed56c336ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Refreshing network info cache for port b03048b5-3014-4343-9639-e364514f44d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.724 238887 DEBUG oslo_concurrency.lockutils [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.726 238887 DEBUG oslo_concurrency.lockutils [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.726 238887 DEBUG oslo_concurrency.lockutils [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.727 238887 DEBUG oslo_concurrency.lockutils [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.727 238887 DEBUG oslo_concurrency.lockutils [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.728 238887 INFO nova.compute.manager [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Terminating instance#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.730 238887 DEBUG nova.compute.manager [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:01:23 np0005604943 kernel: tapb03048b5-30 (unregistering): left promiscuous mode
Feb  2 07:01:23 np0005604943 NetworkManager[49093]: <info>  [1770033683.7691] device (tapb03048b5-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.770 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:23Z|00124|binding|INFO|Releasing lport b03048b5-3014-4343-9639-e364514f44d0 from this chassis (sb_readonly=0)
Feb  2 07:01:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:23Z|00125|binding|INFO|Setting lport b03048b5-3014-4343-9639-e364514f44d0 down in Southbound
Feb  2 07:01:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:23Z|00126|binding|INFO|Removing iface tapb03048b5-30 ovn-installed in OVS
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.783 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:23.802 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:cf:fd 10.100.0.13'], port_security=['fa:16:3e:d1:cf:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1b038c3f-57e2-4f69-a27c-2ba8d465dfc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de22212f-33f4-472b-8b67-05be2c5418f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc9ca354da4dd4bdccf919f13d3561', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c1acc81f-70be-41e7-925b-e46224557e82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d9a408e-4ec8-415c-980d-60f0a24de8bc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=b03048b5-3014-4343-9639-e364514f44d0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:01:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:23.803 155011 INFO neutron.agent.ovn.metadata.agent [-] Port b03048b5-3014-4343-9639-e364514f44d0 in datapath de22212f-33f4-472b-8b67-05be2c5418f5 unbound from our chassis#033[00m
Feb  2 07:01:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:23.804 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de22212f-33f4-472b-8b67-05be2c5418f5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:01:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:23.805 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ac814168-dc59-47ef-b398-9d294d889674]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:23.805 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5 namespace which is not needed anymore#033[00m
Feb  2 07:01:23 np0005604943 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Feb  2 07:01:23 np0005604943 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 15.278s CPU time.
Feb  2 07:01:23 np0005604943 systemd-machined[206973]: Machine qemu-11-instance-0000000b terminated.
Feb  2 07:01:23 np0005604943 neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5[254193]: [NOTICE]   (254197) : haproxy version is 2.8.14-c23fe91
Feb  2 07:01:23 np0005604943 neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5[254193]: [NOTICE]   (254197) : path to executable is /usr/sbin/haproxy
Feb  2 07:01:23 np0005604943 neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5[254193]: [WARNING]  (254197) : Exiting Master process...
Feb  2 07:01:23 np0005604943 neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5[254193]: [WARNING]  (254197) : Exiting Master process...
Feb  2 07:01:23 np0005604943 neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5[254193]: [ALERT]    (254197) : Current worker (254199) exited with code 143 (Terminated)
Feb  2 07:01:23 np0005604943 neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5[254193]: [WARNING]  (254197) : All workers exited. Exiting... (0)
Feb  2 07:01:23 np0005604943 systemd[1]: libpod-4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f.scope: Deactivated successfully.
Feb  2 07:01:23 np0005604943 podman[256413]: 2026-02-02 12:01:23.929514815 +0000 UTC m=+0.044611102 container died 4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 07:01:23 np0005604943 kernel: tapb03048b5-30: entered promiscuous mode
Feb  2 07:01:23 np0005604943 NetworkManager[49093]: <info>  [1770033683.9480] manager: (tapb03048b5-30): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Feb  2 07:01:23 np0005604943 kernel: tapb03048b5-30 (unregistering): left promiscuous mode
Feb  2 07:01:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:23Z|00127|binding|INFO|Claiming lport b03048b5-3014-4343-9639-e364514f44d0 for this chassis.
Feb  2 07:01:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:23Z|00128|binding|INFO|b03048b5-3014-4343-9639-e364514f44d0: Claiming fa:16:3e:d1:cf:fd 10.100.0.13
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.955 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:23 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f-userdata-shm.mount: Deactivated successfully.
Feb  2 07:01:23 np0005604943 systemd[1]: var-lib-containers-storage-overlay-66ab7892671dce0e4b402c8286cd513e25d16ba370fd07f414a70702ed7c26ca-merged.mount: Deactivated successfully.
Feb  2 07:01:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:23Z|00129|binding|INFO|Setting lport b03048b5-3014-4343-9639-e364514f44d0 ovn-installed in OVS
Feb  2 07:01:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:23Z|00130|binding|INFO|Setting lport b03048b5-3014-4343-9639-e364514f44d0 up in Southbound
Feb  2 07:01:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:23.968 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:cf:fd 10.100.0.13'], port_security=['fa:16:3e:d1:cf:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1b038c3f-57e2-4f69-a27c-2ba8d465dfc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de22212f-33f4-472b-8b67-05be2c5418f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc9ca354da4dd4bdccf919f13d3561', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c1acc81f-70be-41e7-925b-e46224557e82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d9a408e-4ec8-415c-980d-60f0a24de8bc, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=b03048b5-3014-4343-9639-e364514f44d0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:01:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:23Z|00131|binding|INFO|Releasing lport b03048b5-3014-4343-9639-e364514f44d0 from this chassis (sb_readonly=1)
Feb  2 07:01:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:23Z|00132|binding|INFO|Removing iface tapb03048b5-30 ovn-installed in OVS
Feb  2 07:01:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:23Z|00133|if_status|INFO|Not setting lport b03048b5-3014-4343-9639-e364514f44d0 down as sb is readonly
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.970 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:23Z|00134|binding|INFO|Releasing lport b03048b5-3014-4343-9639-e364514f44d0 from this chassis (sb_readonly=0)
Feb  2 07:01:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:01:23Z|00135|binding|INFO|Setting lport b03048b5-3014-4343-9639-e364514f44d0 down in Southbound
Feb  2 07:01:23 np0005604943 podman[256413]: 2026-02-02 12:01:23.973467208 +0000 UTC m=+0.088563485 container cleanup 4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.973 238887 INFO nova.virt.libvirt.driver [-] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Instance destroyed successfully.#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.974 238887 DEBUG nova.objects.instance [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lazy-loading 'resources' on Instance uuid 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:01:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:23.980 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:cf:fd 10.100.0.13'], port_security=['fa:16:3e:d1:cf:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1b038c3f-57e2-4f69-a27c-2ba8d465dfc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de22212f-33f4-472b-8b67-05be2c5418f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '82fc9ca354da4dd4bdccf919f13d3561', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c1acc81f-70be-41e7-925b-e46224557e82', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d9a408e-4ec8-415c-980d-60f0a24de8bc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=b03048b5-3014-4343-9639-e364514f44d0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.981 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:23 np0005604943 systemd[1]: libpod-conmon-4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f.scope: Deactivated successfully.
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.993 238887 DEBUG nova.virt.libvirt.vif [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T11:59:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-620872658',display_name='tempest-TestStampPattern-server-620872658',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-620872658',id=11,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHkgfVtqBda0LVlF5slmF25Lo/XwS8Q8Sghn9kMaubVvv9bxRUWvKYk1te57NsoxW3EiHAVoG8/mfQ9ewKRmH/t5lWTLgWAau4XX+kOaKUVaGSh/OmZZNeyoLD4n3OeH0A==',key_name='tempest-TestStampPattern-80198922',keypairs=<?>,launch_index=0,launched_at=2026-02-02T11:59:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='82fc9ca354da4dd4bdccf919f13d3561',ramdisk_id='',reservation_id='r-jmm0e7q7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-577361379',owner_user_name='tempest-TestStampPattern-577361379-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:00:23Z,user_data=None,user_id='55f5d320b54948c9a8f465d017972291',uuid=1b038c3f-57e2-4f69-a27c-2ba8d465dfc1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b03048b5-3014-4343-9639-e364514f44d0", "address": "fa:16:3e:d1:cf:fd", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03048b5-30", "ovs_interfaceid": "b03048b5-3014-4343-9639-e364514f44d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.994 238887 DEBUG nova.network.os_vif_util [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Converting VIF {"id": "b03048b5-3014-4343-9639-e364514f44d0", "address": "fa:16:3e:d1:cf:fd", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03048b5-30", "ovs_interfaceid": "b03048b5-3014-4343-9639-e364514f44d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.995 238887 DEBUG nova.network.os_vif_util [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d1:cf:fd,bridge_name='br-int',has_traffic_filtering=True,id=b03048b5-3014-4343-9639-e364514f44d0,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03048b5-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.995 238887 DEBUG os_vif [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:cf:fd,bridge_name='br-int',has_traffic_filtering=True,id=b03048b5-3014-4343-9639-e364514f44d0,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03048b5-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.997 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.998 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb03048b5-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:01:23 np0005604943 nova_compute[238883]: 2026-02-02 12:01:23.999 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.000 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3467402909' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.002 238887 INFO os_vif [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:cf:fd,bridge_name='br-int',has_traffic_filtering=True,id=b03048b5-3014-4343-9639-e364514f44d0,network=Network(de22212f-33f4-472b-8b67-05be2c5418f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03048b5-30')#033[00m
Feb  2 07:01:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3467402909' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:24 np0005604943 podman[256452]: 2026-02-02 12:01:24.028645396 +0000 UTC m=+0.036378939 container remove 4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.032 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2dab2278-015b-44b6-bbf0-5ea8f4c84f5a]: (4, ('Mon Feb  2 12:01:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5 (4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f)\n4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f\nMon Feb  2 12:01:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5 (4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f)\n4ce89a93ab265ed97fc68f70c042ff9fe0c32fc236a4d1aa08744f250e06bc9f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.034 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[8f69c8d2-1ae6-400d-b560-94e6ec5bf876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.036 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde22212f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.037 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:24 np0005604943 kernel: tapde22212f-30: left promiscuous mode
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.045 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.047 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.047 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4d9e16-362e-482e-863a-272d53585f6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.068 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c5443046-72af-4c3c-b03c-a8ae9a7b3302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.070 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ff042acc-e937-4cce-a36b-14e17761cc3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.083 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9b931665-927f-4233-9c95-5fbf42575fbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403682, 'reachable_time': 24767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256485, 'error': None, 'target': 'ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.086 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-de22212f-33f4-472b-8b67-05be2c5418f5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.086 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[40a03677-077b-46d1-bc8d-b076a2655946]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.087 155011 INFO neutron.agent.ovn.metadata.agent [-] Port b03048b5-3014-4343-9639-e364514f44d0 in datapath de22212f-33f4-472b-8b67-05be2c5418f5 unbound from our chassis#033[00m
Feb  2 07:01:24 np0005604943 systemd[1]: run-netns-ovnmeta\x2dde22212f\x2d33f4\x2d472b\x2d8b67\x2d05be2c5418f5.mount: Deactivated successfully.
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.088 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de22212f-33f4-472b-8b67-05be2c5418f5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.089 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c6468ec7-a994-403c-b91c-33a187c03f9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.090 155011 INFO neutron.agent.ovn.metadata.agent [-] Port b03048b5-3014-4343-9639-e364514f44d0 in datapath de22212f-33f4-472b-8b67-05be2c5418f5 unbound from our chassis#033[00m
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.091 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de22212f-33f4-472b-8b67-05be2c5418f5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:01:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:01:24.091 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[f97f5637-2f2e-4da2-b9b8-d7c24e6a6ae2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.247 238887 INFO nova.virt.libvirt.driver [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Deleting instance files /var/lib/nova/instances/1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_del#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.248 238887 INFO nova.virt.libvirt.driver [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Deletion of /var/lib/nova/instances/1b038c3f-57e2-4f69-a27c-2ba8d465dfc1_del complete#033[00m
Feb  2 07:01:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 169 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 17 KiB/s wr, 238 op/s
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.318 238887 DEBUG nova.compute.manager [req-c70379bb-6e2c-41d3-b685-a7f3c3afe747 req-6d3ceb36-8a77-4350-80b5-48cdf740b6bf 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received event network-vif-unplugged-b03048b5-3014-4343-9639-e364514f44d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.319 238887 DEBUG oslo_concurrency.lockutils [req-c70379bb-6e2c-41d3-b685-a7f3c3afe747 req-6d3ceb36-8a77-4350-80b5-48cdf740b6bf 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.319 238887 DEBUG oslo_concurrency.lockutils [req-c70379bb-6e2c-41d3-b685-a7f3c3afe747 req-6d3ceb36-8a77-4350-80b5-48cdf740b6bf 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.319 238887 DEBUG oslo_concurrency.lockutils [req-c70379bb-6e2c-41d3-b685-a7f3c3afe747 req-6d3ceb36-8a77-4350-80b5-48cdf740b6bf 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.319 238887 DEBUG nova.compute.manager [req-c70379bb-6e2c-41d3-b685-a7f3c3afe747 req-6d3ceb36-8a77-4350-80b5-48cdf740b6bf 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] No waiting events found dispatching network-vif-unplugged-b03048b5-3014-4343-9639-e364514f44d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.320 238887 DEBUG nova.compute.manager [req-c70379bb-6e2c-41d3-b685-a7f3c3afe747 req-6d3ceb36-8a77-4350-80b5-48cdf740b6bf 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received event network-vif-unplugged-b03048b5-3014-4343-9639-e364514f44d0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.330 238887 INFO nova.compute.manager [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.331 238887 DEBUG oslo.service.loopingcall [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.331 238887 DEBUG nova.compute.manager [-] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.331 238887 DEBUG nova.network.neutron [-] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:01:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1616831408' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1616831408' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.964 238887 DEBUG nova.network.neutron [-] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.966 238887 DEBUG nova.network.neutron [req-855e7198-d149-40d2-bcbb-a0ed9eace6f2 req-6cff54a3-41f5-4159-842a-a0ed56c336ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Updated VIF entry in instance network info cache for port b03048b5-3014-4343-9639-e364514f44d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.967 238887 DEBUG nova.network.neutron [req-855e7198-d149-40d2-bcbb-a0ed9eace6f2 req-6cff54a3-41f5-4159-842a-a0ed56c336ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Updating instance_info_cache with network_info: [{"id": "b03048b5-3014-4343-9639-e364514f44d0", "address": "fa:16:3e:d1:cf:fd", "network": {"id": "de22212f-33f4-472b-8b67-05be2c5418f5", "bridge": "br-int", "label": "tempest-TestStampPattern-1128598159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "82fc9ca354da4dd4bdccf919f13d3561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03048b5-30", "ovs_interfaceid": "b03048b5-3014-4343-9639-e364514f44d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.989 238887 INFO nova.compute.manager [-] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Took 0.66 seconds to deallocate network for instance.#033[00m
Feb  2 07:01:24 np0005604943 nova_compute[238883]: 2026-02-02 12:01:24.995 238887 DEBUG oslo_concurrency.lockutils [req-855e7198-d149-40d2-bcbb-a0ed9eace6f2 req-6cff54a3-41f5-4159-842a-a0ed56c336ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:01:25 np0005604943 nova_compute[238883]: 2026-02-02 12:01:25.029 238887 DEBUG oslo_concurrency.lockutils [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:25 np0005604943 nova_compute[238883]: 2026-02-02 12:01:25.030 238887 DEBUG oslo_concurrency.lockutils [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:25 np0005604943 nova_compute[238883]: 2026-02-02 12:01:25.074 238887 DEBUG oslo_concurrency.processutils [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:01:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/726810653' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/726810653' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:01:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295602124' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:01:25 np0005604943 nova_compute[238883]: 2026-02-02 12:01:25.601 238887 DEBUG oslo_concurrency.processutils [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:01:25 np0005604943 nova_compute[238883]: 2026-02-02 12:01:25.609 238887 DEBUG nova.compute.provider_tree [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:01:25 np0005604943 nova_compute[238883]: 2026-02-02 12:01:25.627 238887 DEBUG nova.scheduler.client.report [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:01:25 np0005604943 nova_compute[238883]: 2026-02-02 12:01:25.650 238887 DEBUG oslo_concurrency.lockutils [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:25 np0005604943 nova_compute[238883]: 2026-02-02 12:01:25.674 238887 INFO nova.scheduler.client.report [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Deleted allocations for instance 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1#033[00m
Feb  2 07:01:25 np0005604943 nova_compute[238883]: 2026-02-02 12:01:25.730 238887 DEBUG oslo_concurrency.lockutils [None req-ed07991d-9b64-4799-8797-f8ea991f17a2 55f5d320b54948c9a8f465d017972291 82fc9ca354da4dd4bdccf919f13d3561 - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:25 np0005604943 nova_compute[238883]: 2026-02-02 12:01:25.759 238887 DEBUG nova.compute.manager [req-1e139594-2229-4e14-88d2-22423e2009d3 req-5ca4fd95-e01a-4a6f-8b69-f29654a52fc1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received event network-vif-deleted-b03048b5-3014-4343-9639-e364514f44d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:01:25 np0005604943 nova_compute[238883]: 2026-02-02 12:01:25.759 238887 INFO nova.compute.manager [req-1e139594-2229-4e14-88d2-22423e2009d3 req-5ca4fd95-e01a-4a6f-8b69-f29654a52fc1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Neutron deleted interface b03048b5-3014-4343-9639-e364514f44d0; detaching it from the instance and deleting it from the info cache#033[00m
Feb  2 07:01:25 np0005604943 nova_compute[238883]: 2026-02-02 12:01:25.759 238887 DEBUG nova.network.neutron [req-1e139594-2229-4e14-88d2-22423e2009d3 req-5ca4fd95-e01a-4a6f-8b69-f29654a52fc1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Feb  2 07:01:25 np0005604943 nova_compute[238883]: 2026-02-02 12:01:25.762 238887 DEBUG nova.compute.manager [req-1e139594-2229-4e14-88d2-22423e2009d3 req-5ca4fd95-e01a-4a6f-8b69-f29654a52fc1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Detach interface failed, port_id=b03048b5-3014-4343-9639-e364514f44d0, reason: Instance 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb  2 07:01:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 169 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 11 KiB/s wr, 196 op/s
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.345 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.390 238887 DEBUG nova.compute.manager [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received event network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.391 238887 DEBUG oslo_concurrency.lockutils [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.391 238887 DEBUG oslo_concurrency.lockutils [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.391 238887 DEBUG oslo_concurrency.lockutils [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.391 238887 DEBUG nova.compute.manager [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] No waiting events found dispatching network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.392 238887 WARNING nova.compute.manager [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received unexpected event network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.392 238887 DEBUG nova.compute.manager [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received event network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.392 238887 DEBUG oslo_concurrency.lockutils [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.392 238887 DEBUG oslo_concurrency.lockutils [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.393 238887 DEBUG oslo_concurrency.lockutils [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.393 238887 DEBUG nova.compute.manager [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] No waiting events found dispatching network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.393 238887 WARNING nova.compute.manager [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received unexpected event network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.393 238887 DEBUG nova.compute.manager [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received event network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.393 238887 DEBUG oslo_concurrency.lockutils [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.394 238887 DEBUG oslo_concurrency.lockutils [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.394 238887 DEBUG oslo_concurrency.lockutils [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.394 238887 DEBUG nova.compute.manager [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] No waiting events found dispatching network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.394 238887 WARNING nova.compute.manager [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received unexpected event network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.395 238887 DEBUG nova.compute.manager [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received event network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.395 238887 DEBUG oslo_concurrency.lockutils [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.395 238887 DEBUG oslo_concurrency.lockutils [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.396 238887 DEBUG oslo_concurrency.lockutils [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "1b038c3f-57e2-4f69-a27c-2ba8d465dfc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.396 238887 DEBUG nova.compute.manager [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] No waiting events found dispatching network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:01:26 np0005604943 nova_compute[238883]: 2026-02-02 12:01:26.396 238887 WARNING nova.compute.manager [req-11aeb7e0-a60b-4d85-8ba0-27efb98f678e req-f828979b-8bd1-4269-9414-a94a33dfc1f1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Received unexpected event network-vif-plugged-b03048b5-3014-4343-9639-e364514f44d0 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:01:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/8148856' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/8148856' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:01:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Feb  2 07:01:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Feb  2 07:01:27 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Feb  2 07:01:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2801558472' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2801558472' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 118 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 226 KiB/s rd, 12 KiB/s wr, 315 op/s
Feb  2 07:01:29 np0005604943 nova_compute[238883]: 2026-02-02 12:01:28.999 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:29 np0005604943 nova_compute[238883]: 2026-02-02 12:01:29.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:01:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Feb  2 07:01:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Feb  2 07:01:30 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Feb  2 07:01:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 90 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 11 KiB/s wr, 305 op/s
Feb  2 07:01:31 np0005604943 nova_compute[238883]: 2026-02-02 12:01:31.347 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:31 np0005604943 nova_compute[238883]: 2026-02-02 12:01:31.385 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033676.38358, 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:01:31 np0005604943 nova_compute[238883]: 2026-02-02 12:01:31.385 238887 INFO nova.compute.manager [-] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:01:31 np0005604943 nova_compute[238883]: 2026-02-02 12:01:31.408 238887 DEBUG nova.compute.manager [None req-bf23e7f1-4d95-4a86-96bc-5dae00563636 - - - - - -] [instance: 959f6de5-0a2a-44e5-b7dc-ef6b03a6e7b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:01:31 np0005604943 nova_compute[238883]: 2026-02-02 12:01:31.506 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:31 np0005604943 nova_compute[238883]: 2026-02-02 12:01:31.574 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:31 np0005604943 nova_compute[238883]: 2026-02-02 12:01:31.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:01:31 np0005604943 nova_compute[238883]: 2026-02-02 12:01:31.667 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:31 np0005604943 nova_compute[238883]: 2026-02-02 12:01:31.668 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:31 np0005604943 nova_compute[238883]: 2026-02-02 12:01:31.668 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:31 np0005604943 nova_compute[238883]: 2026-02-02 12:01:31.668 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:01:31 np0005604943 nova_compute[238883]: 2026-02-02 12:01:31.669 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:01:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Feb  2 07:01:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Feb  2 07:01:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Feb  2 07:01:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:01:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999172197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:01:32 np0005604943 nova_compute[238883]: 2026-02-02 12:01:32.219 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:01:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 90 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 4.5 KiB/s wr, 210 op/s
Feb  2 07:01:32 np0005604943 nova_compute[238883]: 2026-02-02 12:01:32.383 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:01:32 np0005604943 nova_compute[238883]: 2026-02-02 12:01:32.384 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4552MB free_disk=59.98815828561783GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:01:32 np0005604943 nova_compute[238883]: 2026-02-02 12:01:32.385 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:01:32 np0005604943 nova_compute[238883]: 2026-02-02 12:01:32.385 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:01:32 np0005604943 nova_compute[238883]: 2026-02-02 12:01:32.444 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:01:32 np0005604943 nova_compute[238883]: 2026-02-02 12:01:32.445 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:01:32 np0005604943 nova_compute[238883]: 2026-02-02 12:01:32.465 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:01:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:01:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:01:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/424722781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:01:33 np0005604943 nova_compute[238883]: 2026-02-02 12:01:33.004 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:01:33 np0005604943 nova_compute[238883]: 2026-02-02 12:01:33.010 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:01:33 np0005604943 nova_compute[238883]: 2026-02-02 12:01:33.027 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:01:33 np0005604943 nova_compute[238883]: 2026-02-02 12:01:33.047 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:01:33 np0005604943 nova_compute[238883]: 2026-02-02 12:01:33.047 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:01:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Feb  2 07:01:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Feb  2 07:01:33 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Feb  2 07:01:34 np0005604943 nova_compute[238883]: 2026-02-02 12:01:34.001 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:34 np0005604943 nova_compute[238883]: 2026-02-02 12:01:34.047 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:01:34 np0005604943 nova_compute[238883]: 2026-02-02 12:01:34.048 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:01:34 np0005604943 nova_compute[238883]: 2026-02-02 12:01:34.048 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 07:01:34 np0005604943 nova_compute[238883]: 2026-02-02 12:01:34.069 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 07:01:34 np0005604943 nova_compute[238883]: 2026-02-02 12:01:34.069 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:01:34 np0005604943 nova_compute[238883]: 2026-02-02 12:01:34.069 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:01:34 np0005604943 nova_compute[238883]: 2026-02-02 12:01:34.069 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:01:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 6.0 KiB/s wr, 114 op/s
Feb  2 07:01:34 np0005604943 nova_compute[238883]: 2026-02-02 12:01:34.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:01:34 np0005604943 nova_compute[238883]: 2026-02-02 12:01:34.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:01:35 np0005604943 nova_compute[238883]: 2026-02-02 12:01:35.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:01:36 np0005604943 podman[256557]: 2026-02-02 12:01:36.042158763 +0000 UTC m=+0.055042245 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:01:36 np0005604943 podman[256556]: 2026-02-02 12:01:36.06080121 +0000 UTC m=+0.078519653 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 07:01:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Feb  2 07:01:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Feb  2 07:01:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 3.0 KiB/s wr, 41 op/s
Feb  2 07:01:36 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Feb  2 07:01:36 np0005604943 nova_compute[238883]: 2026-02-02 12:01:36.349 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:36 np0005604943 nova_compute[238883]: 2026-02-02 12:01:36.635 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:01:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:01:37 np0005604943 nova_compute[238883]: 2026-02-02 12:01:37.649 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:01:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 5.9 KiB/s wr, 101 op/s
Feb  2 07:01:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Feb  2 07:01:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Feb  2 07:01:38 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Feb  2 07:01:38 np0005604943 nova_compute[238883]: 2026-02-02 12:01:38.974 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033683.972348, 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:01:38 np0005604943 nova_compute[238883]: 2026-02-02 12:01:38.974 238887 INFO nova.compute.manager [-] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:01:38 np0005604943 nova_compute[238883]: 2026-02-02 12:01:38.996 238887 DEBUG nova.compute.manager [None req-453c7dd6-9798-4017-a172-b1c82d380484 - - - - - -] [instance: 1b038c3f-57e2-4f69-a27c-2ba8d465dfc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:01:39 np0005604943 nova_compute[238883]: 2026-02-02 12:01:39.003 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.8 KiB/s wr, 57 op/s
Feb  2 07:01:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Feb  2 07:01:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Feb  2 07:01:40 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Feb  2 07:01:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:01:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:01:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:01:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:01:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:01:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:01:41 np0005604943 nova_compute[238883]: 2026-02-02 12:01:41.351 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Feb  2 07:01:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Feb  2 07:01:41 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Feb  2 07:01:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.2 KiB/s wr, 66 op/s
Feb  2 07:01:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800852088' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800852088' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:01:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Feb  2 07:01:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Feb  2 07:01:42 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Feb  2 07:01:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3895003687' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3895003687' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:44 np0005604943 nova_compute[238883]: 2026-02-02 12:01:44.005 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 88 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 5.2 KiB/s wr, 106 op/s
Feb  2 07:01:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Feb  2 07:01:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Feb  2 07:01:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Feb  2 07:01:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482944121' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482944121' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/353837093' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/353837093' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 88 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 5.1 KiB/s wr, 103 op/s
Feb  2 07:01:46 np0005604943 nova_compute[238883]: 2026-02-02 12:01:46.353 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:01:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Feb  2 07:01:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Feb  2 07:01:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Feb  2 07:01:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 88 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 6.0 KiB/s wr, 167 op/s
Feb  2 07:01:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Feb  2 07:01:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Feb  2 07:01:48 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Feb  2 07:01:49 np0005604943 nova_compute[238883]: 2026-02-02 12:01:49.007 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 88 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 KiB/s wr, 70 op/s
Feb  2 07:01:51 np0005604943 nova_compute[238883]: 2026-02-02 12:01:51.355 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 88 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.4 KiB/s wr, 55 op/s
Feb  2 07:01:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1184829100' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1184829100' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:01:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Feb  2 07:01:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Feb  2 07:01:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Feb  2 07:01:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Feb  2 07:01:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Feb  2 07:01:53 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Feb  2 07:01:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 07:01:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 17K writes, 62K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 17K writes, 6165 syncs, 2.85 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 38K keys, 11K commit groups, 1.0 writes per commit group, ingest: 22.68 MB, 0.04 MB/s#012Interval WAL: 11K writes, 5176 syncs, 2.27 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 07:01:54 np0005604943 nova_compute[238883]: 2026-02-02 12:01:54.009 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 8.5 KiB/s wr, 122 op/s
Feb  2 07:01:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.7 KiB/s wr, 97 op/s
Feb  2 07:01:56 np0005604943 nova_compute[238883]: 2026-02-02 12:01:56.357 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Feb  2 07:01:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Feb  2 07:01:56 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Feb  2 07:01:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:01:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 07:01:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 17K writes, 66K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 17K writes, 5894 syncs, 3.01 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 24.34 MB, 0.04 MB/s#012Interval WAL: 10K writes, 4497 syncs, 2.36 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 07:01:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1909322153' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1909322153' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:01:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 10 KiB/s wr, 183 op/s
Feb  2 07:01:59 np0005604943 nova_compute[238883]: 2026-02-02 12:01:59.010 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:01:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:01:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4292383323' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:01:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:01:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4292383323' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3998435554' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 109 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.7 MiB/s wr, 128 op/s
Feb  2 07:02:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Feb  2 07:02:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Feb  2 07:02:00 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Feb  2 07:02:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:00.599 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:02:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:00.600 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 07:02:00 np0005604943 nova_compute[238883]: 2026-02-02 12:02:00.601 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:01 np0005604943 nova_compute[238883]: 2026-02-02 12:02:01.359 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Feb  2 07:02:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Feb  2 07:02:01 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Feb  2 07:02:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 07:02:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 14K writes, 57K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 4680 syncs, 3.10 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8815 writes, 33K keys, 8815 commit groups, 1.0 writes per commit group, ingest: 21.03 MB, 0.04 MB/s#012Interval WAL: 8815 writes, 3778 syncs, 2.33 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 07:02:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1941291990' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1941291990' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 109 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 106 op/s
Feb  2 07:02:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:02:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Feb  2 07:02:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Feb  2 07:02:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Feb  2 07:02:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:03 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2192343987' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:03 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2192343987' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:03.601 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:02:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:03 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2851714725' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:04 np0005604943 nova_compute[238883]: 2026-02-02 12:02:04.012 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 180 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.1 MiB/s wr, 231 op/s
Feb  2 07:02:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Feb  2 07:02:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Feb  2 07:02:04 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Feb  2 07:02:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3776329797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3776329797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:04 np0005604943 ceph-mgr[75558]: [devicehealth INFO root] Check health
Feb  2 07:02:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1716032580' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Feb  2 07:02:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Feb  2 07:02:05 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Feb  2 07:02:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3563638800' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3563638800' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 180 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.1 MiB/s wr, 235 op/s
Feb  2 07:02:06 np0005604943 nova_compute[238883]: 2026-02-02 12:02:06.362 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Feb  2 07:02:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Feb  2 07:02:06 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Feb  2 07:02:07 np0005604943 podman[256606]: 2026-02-02 12:02:07.057337812 +0000 UTC m=+0.062903638 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb  2 07:02:07 np0005604943 podman[256605]: 2026-02-02 12:02:07.073809438 +0000 UTC m=+0.084931691 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 07:02:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1817692137' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1817692137' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:02:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 211 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.9 MiB/s wr, 199 op/s
Feb  2 07:02:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/301162072' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/301162072' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1228061949' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:09 np0005604943 nova_compute[238883]: 2026-02-02 12:02:09.013 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:02:09
Feb  2 07:02:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:02:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:02:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'default.rgw.log', 'backups', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.meta']
Feb  2 07:02:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:02:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Feb  2 07:02:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Feb  2 07:02:09 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Feb  2 07:02:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:10.027 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:10.027 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:10.027 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 226 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 224 op/s
Feb  2 07:02:10 np0005604943 nova_compute[238883]: 2026-02-02 12:02:10.544 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Acquiring lock "74c6efeb-3664-46ac-a191-a2af260625f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:10 np0005604943 nova_compute[238883]: 2026-02-02 12:02:10.545 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Feb  2 07:02:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Feb  2 07:02:10 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:02:10 np0005604943 nova_compute[238883]: 2026-02-02 12:02:10.877 238887 DEBUG nova.compute.manager [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:02:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.010 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.011 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.019 238887 DEBUG nova.virt.hardware [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.019 238887 INFO nova.compute.claims [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.125 238887 DEBUG oslo_concurrency.processutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.364 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:02:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1276261499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.682 238887 DEBUG oslo_concurrency.processutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.688 238887 DEBUG nova.compute.provider_tree [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.708 238887 DEBUG nova.scheduler.client.report [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.732 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.733 238887 DEBUG nova.compute.manager [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.776 238887 DEBUG nova.compute.manager [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.777 238887 DEBUG nova.network.neutron [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.795 238887 INFO nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.812 238887 DEBUG nova.compute.manager [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:02:11 np0005604943 nova_compute[238883]: 2026-02-02 12:02:11.866 238887 INFO nova.virt.block_device [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Booting with volume 79c7fa59-0aa0-4f72-b1bb-4182030d587d at /dev/vda#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.079 238887 DEBUG os_brick.utils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.080 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.089 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.089 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[6471649f-9cc5-4ed9-ac87-edf8fcdd3ff0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.091 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.124 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.124 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb8273f-e53d-48f9-a243-8134f574e2b3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.127 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.137 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.138 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[dc153c36-d840-40f0-8366-c6836a97d667]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.139 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[c05558b0-a046-484e-baa5-6f1df407273b]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.140 238887 DEBUG oslo_concurrency.processutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.161 238887 DEBUG oslo_concurrency.processutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.163 238887 DEBUG os_brick.initiator.connectors.lightos [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.163 238887 DEBUG os_brick.initiator.connectors.lightos [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.163 238887 DEBUG os_brick.initiator.connectors.lightos [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.164 238887 DEBUG os_brick.utils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] <== get_connector_properties: return (84ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.164 238887 DEBUG nova.virt.block_device [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updating existing volume attachment record: dec83381-e3a3-465a-8ff3-b176a6422d8c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:02:12 np0005604943 ovn_controller[145056]: 2026-02-02T12:02:12Z|00136|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Feb  2 07:02:12 np0005604943 nova_compute[238883]: 2026-02-02 12:02:12.218 238887 DEBUG nova.policy [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8f09513610247a8bb0c10546e2d036e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '34a46b2cbe7d4757b891bffab0c70022', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:02:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 226 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 235 op/s
Feb  2 07:02:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:02:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Feb  2 07:02:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Feb  2 07:02:12 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Feb  2 07:02:13 np0005604943 nova_compute[238883]: 2026-02-02 12:02:13.076 238887 DEBUG nova.network.neutron [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Successfully created port: 1849877d-6591-447e-a3a5-68b010c64ba2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:02:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3106941149' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:13 np0005604943 nova_compute[238883]: 2026-02-02 12:02:13.491 238887 DEBUG nova.compute.manager [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:02:13 np0005604943 nova_compute[238883]: 2026-02-02 12:02:13.494 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:02:13 np0005604943 nova_compute[238883]: 2026-02-02 12:02:13.494 238887 INFO nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Creating image(s)#033[00m
Feb  2 07:02:13 np0005604943 nova_compute[238883]: 2026-02-02 12:02:13.494 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 07:02:13 np0005604943 nova_compute[238883]: 2026-02-02 12:02:13.495 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Ensure instance console log exists: /var/lib/nova/instances/74c6efeb-3664-46ac-a191-a2af260625f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:02:13 np0005604943 nova_compute[238883]: 2026-02-02 12:02:13.495 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:13 np0005604943 nova_compute[238883]: 2026-02-02 12:02:13.495 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:13 np0005604943 nova_compute[238883]: 2026-02-02 12:02:13.496 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:13 np0005604943 nova_compute[238883]: 2026-02-02 12:02:13.954 238887 DEBUG nova.network.neutron [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Successfully updated port: 1849877d-6591-447e-a3a5-68b010c64ba2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:02:13 np0005604943 nova_compute[238883]: 2026-02-02 12:02:13.968 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Acquiring lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:02:13 np0005604943 nova_compute[238883]: 2026-02-02 12:02:13.968 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Acquired lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:02:13 np0005604943 nova_compute[238883]: 2026-02-02 12:02:13.968 238887 DEBUG nova.network.neutron [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:02:14 np0005604943 nova_compute[238883]: 2026-02-02 12:02:14.015 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:14 np0005604943 nova_compute[238883]: 2026-02-02 12:02:14.057 238887 DEBUG nova.compute.manager [req-a2357a48-985e-439c-8dbd-8996e86d4e7f req-77c8b2ce-8df9-4ec9-81c3-cdc6648c6645 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Received event network-changed-1849877d-6591-447e-a3a5-68b010c64ba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:02:14 np0005604943 nova_compute[238883]: 2026-02-02 12:02:14.058 238887 DEBUG nova.compute.manager [req-a2357a48-985e-439c-8dbd-8996e86d4e7f req-77c8b2ce-8df9-4ec9-81c3-cdc6648c6645 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Refreshing instance network info cache due to event network-changed-1849877d-6591-447e-a3a5-68b010c64ba2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:02:14 np0005604943 nova_compute[238883]: 2026-02-02 12:02:14.058 238887 DEBUG oslo_concurrency.lockutils [req-a2357a48-985e-439c-8dbd-8996e86d4e7f req-77c8b2ce-8df9-4ec9-81c3-cdc6648c6645 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:02:14 np0005604943 nova_compute[238883]: 2026-02-02 12:02:14.177 238887 DEBUG nova.network.neutron [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:02:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:02:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:02:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 226 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.0 MiB/s wr, 136 op/s
Feb  2 07:02:14 np0005604943 podman[256890]: 2026-02-02 12:02:14.528034703 +0000 UTC m=+0.034919357 container create 5550af701e02647d6773cb164b3ee4efc00363a2b37e5c791d1f76687dbf3f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_euclid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 07:02:14 np0005604943 systemd[1]: Started libpod-conmon-5550af701e02647d6773cb164b3ee4efc00363a2b37e5c791d1f76687dbf3f08.scope.
Feb  2 07:02:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3831680881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:14 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:02:14 np0005604943 podman[256890]: 2026-02-02 12:02:14.511244448 +0000 UTC m=+0.018129122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:02:14 np0005604943 podman[256890]: 2026-02-02 12:02:14.61962113 +0000 UTC m=+0.126505814 container init 5550af701e02647d6773cb164b3ee4efc00363a2b37e5c791d1f76687dbf3f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:02:14 np0005604943 podman[256890]: 2026-02-02 12:02:14.627277373 +0000 UTC m=+0.134162037 container start 5550af701e02647d6773cb164b3ee4efc00363a2b37e5c791d1f76687dbf3f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_euclid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:02:14 np0005604943 podman[256890]: 2026-02-02 12:02:14.630838937 +0000 UTC m=+0.137723611 container attach 5550af701e02647d6773cb164b3ee4efc00363a2b37e5c791d1f76687dbf3f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_euclid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 07:02:14 np0005604943 stupefied_euclid[256907]: 167 167
Feb  2 07:02:14 np0005604943 systemd[1]: libpod-5550af701e02647d6773cb164b3ee4efc00363a2b37e5c791d1f76687dbf3f08.scope: Deactivated successfully.
Feb  2 07:02:14 np0005604943 conmon[256907]: conmon 5550af701e02647d6773 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5550af701e02647d6773cb164b3ee4efc00363a2b37e5c791d1f76687dbf3f08.scope/container/memory.events
Feb  2 07:02:14 np0005604943 podman[256890]: 2026-02-02 12:02:14.637153614 +0000 UTC m=+0.144038288 container died 5550af701e02647d6773cb164b3ee4efc00363a2b37e5c791d1f76687dbf3f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:02:14 np0005604943 systemd[1]: var-lib-containers-storage-overlay-6210fb923ba661fdc678a4adb7f0d86dcb8014da6c0a7f552a06cdd74923a9b8-merged.mount: Deactivated successfully.
Feb  2 07:02:14 np0005604943 podman[256890]: 2026-02-02 12:02:14.679977039 +0000 UTC m=+0.186861693 container remove 5550af701e02647d6773cb164b3ee4efc00363a2b37e5c791d1f76687dbf3f08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:02:14 np0005604943 systemd[1]: libpod-conmon-5550af701e02647d6773cb164b3ee4efc00363a2b37e5c791d1f76687dbf3f08.scope: Deactivated successfully.
Feb  2 07:02:14 np0005604943 podman[256930]: 2026-02-02 12:02:14.801995093 +0000 UTC m=+0.036846747 container create 32e77b8ff104044638461b7a62f6442672d8259626880895aaae06ef951fb490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goldwasser, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:02:14 np0005604943 systemd[1]: Started libpod-conmon-32e77b8ff104044638461b7a62f6442672d8259626880895aaae06ef951fb490.scope.
Feb  2 07:02:14 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:02:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93776f1699593f548ceb92fa42e45e43d22aeee44a0a64c5d3897bce7b1991b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93776f1699593f548ceb92fa42e45e43d22aeee44a0a64c5d3897bce7b1991b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93776f1699593f548ceb92fa42e45e43d22aeee44a0a64c5d3897bce7b1991b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93776f1699593f548ceb92fa42e45e43d22aeee44a0a64c5d3897bce7b1991b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:14 np0005604943 podman[256930]: 2026-02-02 12:02:14.869956343 +0000 UTC m=+0.104807997 container init 32e77b8ff104044638461b7a62f6442672d8259626880895aaae06ef951fb490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:02:14 np0005604943 podman[256930]: 2026-02-02 12:02:14.877607786 +0000 UTC m=+0.112459450 container start 32e77b8ff104044638461b7a62f6442672d8259626880895aaae06ef951fb490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goldwasser, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:02:14 np0005604943 podman[256930]: 2026-02-02 12:02:14.880966715 +0000 UTC m=+0.115818369 container attach 32e77b8ff104044638461b7a62f6442672d8259626880895aaae06ef951fb490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:02:14 np0005604943 podman[256930]: 2026-02-02 12:02:14.785770162 +0000 UTC m=+0.020621846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]: [
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:    {
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:        "available": false,
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:        "being_replaced": false,
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:        "ceph_device_lvm": false,
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:        "lsm_data": {},
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:        "lvs": [],
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:        "path": "/dev/sr0",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:        "rejected_reasons": [
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "Has a FileSystem",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "Insufficient space (<5GB)"
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:        ],
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:        "sys_api": {
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "actuators": null,
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "device_nodes": [
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:                "sr0"
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            ],
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "devname": "sr0",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "human_readable_size": "482.00 KB",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "id_bus": "ata",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "model": "QEMU DVD-ROM",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "nr_requests": "2",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "parent": "/dev/sr0",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "partitions": {},
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "path": "/dev/sr0",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "removable": "1",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "rev": "2.5+",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "ro": "0",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "rotational": "1",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "sas_address": "",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "sas_device_handle": "",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "scheduler_mode": "mq-deadline",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "sectors": 0,
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "sectorsize": "2048",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "size": 493568.0,
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "support_discard": "2048",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "type": "disk",
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:            "vendor": "QEMU"
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:        }
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]:    }
Feb  2 07:02:15 np0005604943 vigorous_goldwasser[256946]: ]
Feb  2 07:02:15 np0005604943 systemd[1]: libpod-32e77b8ff104044638461b7a62f6442672d8259626880895aaae06ef951fb490.scope: Deactivated successfully.
Feb  2 07:02:15 np0005604943 conmon[256946]: conmon 32e77b8ff10404463846 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32e77b8ff104044638461b7a62f6442672d8259626880895aaae06ef951fb490.scope/container/memory.events
Feb  2 07:02:15 np0005604943 podman[256930]: 2026-02-02 12:02:15.404639392 +0000 UTC m=+0.639491046 container died 32e77b8ff104044638461b7a62f6442672d8259626880895aaae06ef951fb490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 07:02:15 np0005604943 systemd[1]: var-lib-containers-storage-overlay-93776f1699593f548ceb92fa42e45e43d22aeee44a0a64c5d3897bce7b1991b2-merged.mount: Deactivated successfully.
Feb  2 07:02:15 np0005604943 podman[256930]: 2026-02-02 12:02:15.451016491 +0000 UTC m=+0.685868145 container remove 32e77b8ff104044638461b7a62f6442672d8259626880895aaae06ef951fb490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:02:15 np0005604943 systemd[1]: libpod-conmon-32e77b8ff104044638461b7a62f6442672d8259626880895aaae06ef951fb490.scope: Deactivated successfully.
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:02:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.582 238887 DEBUG nova.network.neutron [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updating instance_info_cache with network_info: [{"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.627 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Releasing lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.627 238887 DEBUG nova.compute.manager [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Instance network_info: |[{"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.628 238887 DEBUG oslo_concurrency.lockutils [req-a2357a48-985e-439c-8dbd-8996e86d4e7f req-77c8b2ce-8df9-4ec9-81c3-cdc6648c6645 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.628 238887 DEBUG nova.network.neutron [req-a2357a48-985e-439c-8dbd-8996e86d4e7f req-77c8b2ce-8df9-4ec9-81c3-cdc6648c6645 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Refreshing network info cache for port 1849877d-6591-447e-a3a5-68b010c64ba2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.631 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Start _get_guest_xml network_info=[{"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': 'dec83381-e3a3-465a-8ff3-b176a6422d8c', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-79c7fa59-0aa0-4f72-b1bb-4182030d587d', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '79c7fa59-0aa0-4f72-b1bb-4182030d587d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '74c6efeb-3664-46ac-a191-a2af260625f7', 'attached_at': '', 'detached_at': '', 'volume_id': '79c7fa59-0aa0-4f72-b1bb-4182030d587d', 'serial': '79c7fa59-0aa0-4f72-b1bb-4182030d587d'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.637 238887 WARNING nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.642 238887 DEBUG nova.virt.libvirt.host [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.643 238887 DEBUG nova.virt.libvirt.host [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.654 238887 DEBUG nova.virt.libvirt.host [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.655 238887 DEBUG nova.virt.libvirt.host [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.656 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.656 238887 DEBUG nova.virt.hardware [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.656 238887 DEBUG nova.virt.hardware [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.656 238887 DEBUG nova.virt.hardware [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.657 238887 DEBUG nova.virt.hardware [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.657 238887 DEBUG nova.virt.hardware [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.657 238887 DEBUG nova.virt.hardware [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.657 238887 DEBUG nova.virt.hardware [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.658 238887 DEBUG nova.virt.hardware [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.658 238887 DEBUG nova.virt.hardware [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.658 238887 DEBUG nova.virt.hardware [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.658 238887 DEBUG nova.virt.hardware [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.680 238887 DEBUG nova.storage.rbd_utils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] rbd image 74c6efeb-3664-46ac-a191-a2af260625f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:02:15 np0005604943 nova_compute[238883]: 2026-02-02 12:02:15.684 238887 DEBUG oslo_concurrency.processutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:02:15 np0005604943 podman[257853]: 2026-02-02 12:02:15.879450024 +0000 UTC m=+0.033233371 container create 772ed583150b2693f97e758c5d99900f710768ca5b63ededa206646c4a23b816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 07:02:15 np0005604943 systemd[1]: Started libpod-conmon-772ed583150b2693f97e758c5d99900f710768ca5b63ededa206646c4a23b816.scope.
Feb  2 07:02:15 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:02:15 np0005604943 podman[257853]: 2026-02-02 12:02:15.934416252 +0000 UTC m=+0.088199589 container init 772ed583150b2693f97e758c5d99900f710768ca5b63ededa206646c4a23b816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb  2 07:02:15 np0005604943 podman[257853]: 2026-02-02 12:02:15.938973452 +0000 UTC m=+0.092756799 container start 772ed583150b2693f97e758c5d99900f710768ca5b63ededa206646c4a23b816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_kowalevski, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 07:02:15 np0005604943 strange_kowalevski[257869]: 167 167
Feb  2 07:02:15 np0005604943 podman[257853]: 2026-02-02 12:02:15.941791357 +0000 UTC m=+0.095574724 container attach 772ed583150b2693f97e758c5d99900f710768ca5b63ededa206646c4a23b816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:02:15 np0005604943 systemd[1]: libpod-772ed583150b2693f97e758c5d99900f710768ca5b63ededa206646c4a23b816.scope: Deactivated successfully.
Feb  2 07:02:15 np0005604943 podman[257853]: 2026-02-02 12:02:15.942335811 +0000 UTC m=+0.096119158 container died 772ed583150b2693f97e758c5d99900f710768ca5b63ededa206646c4a23b816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 07:02:15 np0005604943 podman[257853]: 2026-02-02 12:02:15.86382782 +0000 UTC m=+0.017611197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:02:15 np0005604943 systemd[1]: var-lib-containers-storage-overlay-33b3f92656f35b7fc8ce7a637c3457f1d7de36b7a883d5694a2a3f08ca8808d9-merged.mount: Deactivated successfully.
Feb  2 07:02:15 np0005604943 podman[257853]: 2026-02-02 12:02:15.973111846 +0000 UTC m=+0.126895193 container remove 772ed583150b2693f97e758c5d99900f710768ca5b63ededa206646c4a23b816 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_kowalevski, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:02:15 np0005604943 systemd[1]: libpod-conmon-772ed583150b2693f97e758c5d99900f710768ca5b63ededa206646c4a23b816.scope: Deactivated successfully.
Feb  2 07:02:16 np0005604943 podman[257893]: 2026-02-02 12:02:16.091920405 +0000 UTC m=+0.034619188 container create 12cdfd44d33f1512ccea9642ae06d40d3f9bf5734c7aafcd67baf9738ad31a94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_haibt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 07:02:16 np0005604943 systemd[1]: Started libpod-conmon-12cdfd44d33f1512ccea9642ae06d40d3f9bf5734c7aafcd67baf9738ad31a94.scope.
Feb  2 07:02:16 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:02:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2fbac0394245ca22e95cac8c898c5ffc0801a2ce2ab472d4a5af6839b57225f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2fbac0394245ca22e95cac8c898c5ffc0801a2ce2ab472d4a5af6839b57225f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2fbac0394245ca22e95cac8c898c5ffc0801a2ce2ab472d4a5af6839b57225f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2fbac0394245ca22e95cac8c898c5ffc0801a2ce2ab472d4a5af6839b57225f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2fbac0394245ca22e95cac8c898c5ffc0801a2ce2ab472d4a5af6839b57225f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:16 np0005604943 podman[257893]: 2026-02-02 12:02:16.157655057 +0000 UTC m=+0.100353900 container init 12cdfd44d33f1512ccea9642ae06d40d3f9bf5734c7aafcd67baf9738ad31a94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_haibt, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 07:02:16 np0005604943 podman[257893]: 2026-02-02 12:02:16.164814276 +0000 UTC m=+0.107513079 container start 12cdfd44d33f1512ccea9642ae06d40d3f9bf5734c7aafcd67baf9738ad31a94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 07:02:16 np0005604943 podman[257893]: 2026-02-02 12:02:16.167626651 +0000 UTC m=+0.110325464 container attach 12cdfd44d33f1512ccea9642ae06d40d3f9bf5734c7aafcd67baf9738ad31a94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:02:16 np0005604943 podman[257893]: 2026-02-02 12:02:16.074584166 +0000 UTC m=+0.017283009 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:02:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494020323' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Feb  2 07:02:16 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:16 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:16 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:02:16 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:16 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.229 238887 DEBUG oslo_concurrency.processutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:02:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Feb  2 07:02:16 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.258 238887 DEBUG nova.virt.libvirt.vif [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:02:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-185840292',display_name='tempest-TestVolumeBackupRestore-server-185840292',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-185840292',id=13,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOGJufSFQNB4hpBzcN9ivKAtSHK0eqhRDSthqnkmRBuzm9obWgGVHoAZsPaCUS4Vee+URE8lWDyUvVBxJaXiZ+7VUQcWNq0pRYsYvi7moWCSna6gLgc8i/WZy00S62zE6Q==',key_name='tempest-TestVolumeBackupRestore-1120990637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='34a46b2cbe7d4757b891bffab0c70022',ramdisk_id='',reservation_id='r-gdl8u785',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-911136857',owner_user_name='tempest-TestVolumeBackupRestore-911136857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:02:11Z,user_data=None,user_id='d8f09513610247a8bb0c10546e2d036e',uuid=74c6efeb-3664-46ac-a191-a2af260625f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.259 238887 DEBUG nova.network.os_vif_util [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Converting VIF {"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.260 238887 DEBUG nova.network.os_vif_util [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:d3:e9,bridge_name='br-int',has_traffic_filtering=True,id=1849877d-6591-447e-a3a5-68b010c64ba2,network=Network(077c46d0-8d19-4d3f-a5fe-650ee517c2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1849877d-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.263 238887 DEBUG nova.objects.instance [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lazy-loading 'pci_devices' on Instance uuid 74c6efeb-3664-46ac-a191-a2af260625f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.277 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  <uuid>74c6efeb-3664-46ac-a191-a2af260625f7</uuid>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  <name>instance-0000000d</name>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestVolumeBackupRestore-server-185840292</nova:name>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:02:15</nova:creationTime>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:        <nova:user uuid="d8f09513610247a8bb0c10546e2d036e">tempest-TestVolumeBackupRestore-911136857-project-member</nova:user>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:        <nova:project uuid="34a46b2cbe7d4757b891bffab0c70022">tempest-TestVolumeBackupRestore-911136857</nova:project>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:        <nova:port uuid="1849877d-6591-447e-a3a5-68b010c64ba2">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <entry name="serial">74c6efeb-3664-46ac-a191-a2af260625f7</entry>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <entry name="uuid">74c6efeb-3664-46ac-a191-a2af260625f7</entry>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/74c6efeb-3664-46ac-a191-a2af260625f7_disk.config">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-79c7fa59-0aa0-4f72-b1bb-4182030d587d">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <serial>79c7fa59-0aa0-4f72-b1bb-4182030d587d</serial>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:c5:d3:e9"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <target dev="tap1849877d-65"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/74c6efeb-3664-46ac-a191-a2af260625f7/console.log" append="off"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:02:16 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:02:16 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:02:16 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:02:16 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.278 238887 DEBUG nova.compute.manager [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Preparing to wait for external event network-vif-plugged-1849877d-6591-447e-a3a5-68b010c64ba2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.279 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Acquiring lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.279 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.280 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.281 238887 DEBUG nova.virt.libvirt.vif [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:02:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-185840292',display_name='tempest-TestVolumeBackupRestore-server-185840292',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-185840292',id=13,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOGJufSFQNB4hpBzcN9ivKAtSHK0eqhRDSthqnkmRBuzm9obWgGVHoAZsPaCUS4Vee+URE8lWDyUvVBxJaXiZ+7VUQcWNq0pRYsYvi7moWCSna6gLgc8i/WZy00S62zE6Q==',key_name='tempest-TestVolumeBackupRestore-1120990637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='34a46b2cbe7d4757b891bffab0c70022',ramdisk_id='',reservation_id='r-gdl8u785',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-911136857',owner_user_name='tempest-TestVolumeBackupRestore-911136857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:02:11Z,user_data=None,user_id='d8f09513610247a8bb0c10546e2d036e',uuid=74c6efeb-3664-46ac-a191-a2af260625f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.282 238887 DEBUG nova.network.os_vif_util [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Converting VIF {"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.283 238887 DEBUG nova.network.os_vif_util [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:d3:e9,bridge_name='br-int',has_traffic_filtering=True,id=1849877d-6591-447e-a3a5-68b010c64ba2,network=Network(077c46d0-8d19-4d3f-a5fe-650ee517c2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1849877d-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.284 238887 DEBUG os_vif [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:d3:e9,bridge_name='br-int',has_traffic_filtering=True,id=1849877d-6591-447e-a3a5-68b010c64ba2,network=Network(077c46d0-8d19-4d3f-a5fe-650ee517c2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1849877d-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.284 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.285 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.286 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.292 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.292 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1849877d-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.293 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1849877d-65, col_values=(('external_ids', {'iface-id': '1849877d-6591-447e-a3a5-68b010c64ba2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:d3:e9', 'vm-uuid': '74c6efeb-3664-46ac-a191-a2af260625f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.295 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:16 np0005604943 NetworkManager[49093]: <info>  [1770033736.2978] manager: (tap1849877d-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.299 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.303 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.304 238887 INFO os_vif [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:d3:e9,bridge_name='br-int',has_traffic_filtering=True,id=1849877d-6591-447e-a3a5-68b010c64ba2,network=Network(077c46d0-8d19-4d3f-a5fe-650ee517c2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1849877d-65')#033[00m
Feb  2 07:02:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 226 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 2.8 KiB/s wr, 62 op/s
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.365 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.376 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.377 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.377 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] No VIF found with MAC fa:16:3e:c5:d3:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.378 238887 INFO nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Using config drive#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.404 238887 DEBUG nova.storage.rbd_utils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] rbd image 74c6efeb-3664-46ac-a191-a2af260625f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:02:16 np0005604943 tender_haibt[257910]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:02:16 np0005604943 tender_haibt[257910]: --> All data devices are unavailable
Feb  2 07:02:16 np0005604943 systemd[1]: libpod-12cdfd44d33f1512ccea9642ae06d40d3f9bf5734c7aafcd67baf9738ad31a94.scope: Deactivated successfully.
Feb  2 07:02:16 np0005604943 podman[257893]: 2026-02-02 12:02:16.599142187 +0000 UTC m=+0.541840980 container died 12cdfd44d33f1512ccea9642ae06d40d3f9bf5734c7aafcd67baf9738ad31a94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 07:02:16 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f2fbac0394245ca22e95cac8c898c5ffc0801a2ce2ab472d4a5af6839b57225f-merged.mount: Deactivated successfully.
Feb  2 07:02:16 np0005604943 podman[257893]: 2026-02-02 12:02:16.635650534 +0000 UTC m=+0.578349327 container remove 12cdfd44d33f1512ccea9642ae06d40d3f9bf5734c7aafcd67baf9738ad31a94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:02:16 np0005604943 systemd[1]: libpod-conmon-12cdfd44d33f1512ccea9642ae06d40d3f9bf5734c7aafcd67baf9738ad31a94.scope: Deactivated successfully.
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.837 238887 INFO nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Creating config drive at /var/lib/nova/instances/74c6efeb-3664-46ac-a191-a2af260625f7/disk.config#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.841 238887 DEBUG oslo_concurrency.processutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/74c6efeb-3664-46ac-a191-a2af260625f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps51b91mg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.964 238887 DEBUG oslo_concurrency.processutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/74c6efeb-3664-46ac-a191-a2af260625f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps51b91mg" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:02:16 np0005604943 nova_compute[238883]: 2026-02-02 12:02:16.996 238887 DEBUG nova.storage.rbd_utils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] rbd image 74c6efeb-3664-46ac-a191-a2af260625f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.000 238887 DEBUG oslo_concurrency.processutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/74c6efeb-3664-46ac-a191-a2af260625f7/disk.config 74c6efeb-3664-46ac-a191-a2af260625f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:02:17 np0005604943 podman[258038]: 2026-02-02 12:02:17.024702953 +0000 UTC m=+0.032290977 container create 29a2c9a8358ef752c34ce479b116a5cc4b7e688ae9e9b60a1e426d72638fa212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_franklin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:02:17 np0005604943 systemd[1]: Started libpod-conmon-29a2c9a8358ef752c34ce479b116a5cc4b7e688ae9e9b60a1e426d72638fa212.scope.
Feb  2 07:02:17 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:02:17 np0005604943 podman[258038]: 2026-02-02 12:02:17.083867781 +0000 UTC m=+0.091455815 container init 29a2c9a8358ef752c34ce479b116a5cc4b7e688ae9e9b60a1e426d72638fa212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:02:17 np0005604943 podman[258038]: 2026-02-02 12:02:17.089744047 +0000 UTC m=+0.097332071 container start 29a2c9a8358ef752c34ce479b116a5cc4b7e688ae9e9b60a1e426d72638fa212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 07:02:17 np0005604943 systemd[1]: libpod-29a2c9a8358ef752c34ce479b116a5cc4b7e688ae9e9b60a1e426d72638fa212.scope: Deactivated successfully.
Feb  2 07:02:17 np0005604943 charming_franklin[258080]: 167 167
Feb  2 07:02:17 np0005604943 podman[258038]: 2026-02-02 12:02:17.096227749 +0000 UTC m=+0.103815793 container attach 29a2c9a8358ef752c34ce479b116a5cc4b7e688ae9e9b60a1e426d72638fa212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 07:02:17 np0005604943 conmon[258080]: conmon 29a2c9a8358ef752c34c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29a2c9a8358ef752c34ce479b116a5cc4b7e688ae9e9b60a1e426d72638fa212.scope/container/memory.events
Feb  2 07:02:17 np0005604943 podman[258038]: 2026-02-02 12:02:17.096928807 +0000 UTC m=+0.104516831 container died 29a2c9a8358ef752c34ce479b116a5cc4b7e688ae9e9b60a1e426d72638fa212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_franklin, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:02:17 np0005604943 podman[258038]: 2026-02-02 12:02:17.010552428 +0000 UTC m=+0.018140472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:02:17 np0005604943 systemd[1]: var-lib-containers-storage-overlay-81699997e74a2d454bc0fbcf0bac66d52f2fd37b9352a51918bc594c5f2fb72f-merged.mount: Deactivated successfully.
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.143 238887 DEBUG oslo_concurrency.processutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/74c6efeb-3664-46ac-a191-a2af260625f7/disk.config 74c6efeb-3664-46ac-a191-a2af260625f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.146 238887 INFO nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Deleting local config drive /var/lib/nova/instances/74c6efeb-3664-46ac-a191-a2af260625f7/disk.config because it was imported into RBD.#033[00m
Feb  2 07:02:17 np0005604943 podman[258038]: 2026-02-02 12:02:17.167358553 +0000 UTC m=+0.174946577 container remove 29a2c9a8358ef752c34ce479b116a5cc4b7e688ae9e9b60a1e426d72638fa212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:02:17 np0005604943 systemd[1]: libpod-conmon-29a2c9a8358ef752c34ce479b116a5cc4b7e688ae9e9b60a1e426d72638fa212.scope: Deactivated successfully.
Feb  2 07:02:17 np0005604943 kernel: tap1849877d-65: entered promiscuous mode
Feb  2 07:02:17 np0005604943 NetworkManager[49093]: <info>  [1770033737.1901] manager: (tap1849877d-65): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.190 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:17 np0005604943 ovn_controller[145056]: 2026-02-02T12:02:17Z|00137|binding|INFO|Claiming lport 1849877d-6591-447e-a3a5-68b010c64ba2 for this chassis.
Feb  2 07:02:17 np0005604943 ovn_controller[145056]: 2026-02-02T12:02:17Z|00138|binding|INFO|1849877d-6591-447e-a3a5-68b010c64ba2: Claiming fa:16:3e:c5:d3:e9 10.100.0.14
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.193 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.197 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.209 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:d3:e9 10.100.0.14'], port_security=['fa:16:3e:c5:d3:e9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '74c6efeb-3664-46ac-a191-a2af260625f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-077c46d0-8d19-4d3f-a5fe-650ee517c2b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a46b2cbe7d4757b891bffab0c70022', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6dd606e7-f627-4726-8281-fd244a3c544e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c68cdb9-bf26-4372-b7f1-4197b5921755, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=1849877d-6591-447e-a3a5-68b010c64ba2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.210 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 1849877d-6591-447e-a3a5-68b010c64ba2 in datapath 077c46d0-8d19-4d3f-a5fe-650ee517c2b7 bound to our chassis#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.211 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 077c46d0-8d19-4d3f-a5fe-650ee517c2b7#033[00m
Feb  2 07:02:17 np0005604943 systemd-machined[206973]: New machine qemu-13-instance-0000000d.
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.219 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[f49089bb-561d-4067-83c8-c7db7911f9f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.219 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap077c46d0-81 in ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.222 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap077c46d0-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.223 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc7fcc3-dc3f-4c39-a0da-6458ca8f0ea5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.223 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cac598-8057-400c-930f-5cba6ac015b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Feb  2 07:02:17 np0005604943 systemd-udevd[258114]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.232 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[44acfe41-f195-4c07-b1df-7ff3c0a10cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 NetworkManager[49093]: <info>  [1770033737.2386] device (tap1849877d-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:02:17 np0005604943 NetworkManager[49093]: <info>  [1770033737.2393] device (tap1849877d-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.248 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:17 np0005604943 ovn_controller[145056]: 2026-02-02T12:02:17Z|00139|binding|INFO|Setting lport 1849877d-6591-447e-a3a5-68b010c64ba2 ovn-installed in OVS
Feb  2 07:02:17 np0005604943 ovn_controller[145056]: 2026-02-02T12:02:17Z|00140|binding|INFO|Setting lport 1849877d-6591-447e-a3a5-68b010c64ba2 up in Southbound
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.251 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.253 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[47810019-c9b2-41ec-bb38-2df16a656adf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.273 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc3c0a3-4743-4c08-92bd-40a7f9a1363b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 NetworkManager[49093]: <info>  [1770033737.2801] manager: (tap077c46d0-80): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.278 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[21bdab5a-d776-4a7a-8e89-3fa3abdbcefd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.303 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[7e92a755-f4c2-4289-a8d7-ecf59244fbdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.306 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[163fcad0-5e9d-4016-af5d-7a95f1c2f491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 podman[258129]: 2026-02-02 12:02:17.311631596 +0000 UTC m=+0.046449201 container create bacc11ebae648ffa348467c0cb238cd3dc57acb6665763db1c0af184d5c56daa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lehmann, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.313 238887 DEBUG nova.network.neutron [req-a2357a48-985e-439c-8dbd-8996e86d4e7f req-77c8b2ce-8df9-4ec9-81c3-cdc6648c6645 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updated VIF entry in instance network info cache for port 1849877d-6591-447e-a3a5-68b010c64ba2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.313 238887 DEBUG nova.network.neutron [req-a2357a48-985e-439c-8dbd-8996e86d4e7f req-77c8b2ce-8df9-4ec9-81c3-cdc6648c6645 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updating instance_info_cache with network_info: [{"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:02:17 np0005604943 NetworkManager[49093]: <info>  [1770033737.3216] device (tap077c46d0-80): carrier: link connected
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.323 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[a6901b6c-9f16-447a-bd9f-bfee48d91b4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.332 238887 DEBUG oslo_concurrency.lockutils [req-a2357a48-985e-439c-8dbd-8996e86d4e7f req-77c8b2ce-8df9-4ec9-81c3-cdc6648c6645 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.339 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3d82b8d7-db0c-4127-8e1a-c783111ebefb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap077c46d0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:2d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418267, 'reachable_time': 30899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258167, 'error': None, 'target': 'ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 systemd[1]: Started libpod-conmon-bacc11ebae648ffa348467c0cb238cd3dc57acb6665763db1c0af184d5c56daa.scope.
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.351 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9c35ffa2-56dc-4e4f-8734-78f9b7f6cb99]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:2d00'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418267, 'tstamp': 418267}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258170, 'error': None, 'target': 'ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.366 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[17200869-e4ff-494d-9b0a-3b1c698fa2be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap077c46d0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:2d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418267, 'reachable_time': 30899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258174, 'error': None, 'target': 'ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6399f7e3ff70f449967813accda5d68ad1cb692ad4abf9b752fd14e0aee5c8f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:17 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6399f7e3ff70f449967813accda5d68ad1cb692ad4abf9b752fd14e0aee5c8f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:17 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6399f7e3ff70f449967813accda5d68ad1cb692ad4abf9b752fd14e0aee5c8f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:17 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6399f7e3ff70f449967813accda5d68ad1cb692ad4abf9b752fd14e0aee5c8f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:17 np0005604943 podman[258129]: 2026-02-02 12:02:17.294236796 +0000 UTC m=+0.029054431 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:02:17 np0005604943 podman[258129]: 2026-02-02 12:02:17.393998149 +0000 UTC m=+0.128815764 container init bacc11ebae648ffa348467c0cb238cd3dc57acb6665763db1c0af184d5c56daa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.395 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[71163434-0bc8-4f19-8d81-984e168695e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 podman[258129]: 2026-02-02 12:02:17.401490968 +0000 UTC m=+0.136308583 container start bacc11ebae648ffa348467c0cb238cd3dc57acb6665763db1c0af184d5c56daa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lehmann, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb  2 07:02:17 np0005604943 podman[258129]: 2026-02-02 12:02:17.405331029 +0000 UTC m=+0.140148634 container attach bacc11ebae648ffa348467c0cb238cd3dc57acb6665763db1c0af184d5c56daa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.442 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[8af1fcf7-ad96-4764-955f-344eedf7a880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.444 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap077c46d0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.444 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.444 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap077c46d0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.446 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:17 np0005604943 NetworkManager[49093]: <info>  [1770033737.4470] manager: (tap077c46d0-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Feb  2 07:02:17 np0005604943 kernel: tap077c46d0-80: entered promiscuous mode
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.450 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.451 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap077c46d0-80, col_values=(('external_ids', {'iface-id': 'ebb31857-67a0-43cf-9b71-c8cb9596225d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.452 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:17 np0005604943 ovn_controller[145056]: 2026-02-02T12:02:17Z|00141|binding|INFO|Releasing lport ebb31857-67a0-43cf-9b71-c8cb9596225d from this chassis (sb_readonly=0)
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.459 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.460 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/077c46d0-8d19-4d3f-a5fe-650ee517c2b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/077c46d0-8d19-4d3f-a5fe-650ee517c2b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.461 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b75704-7100-4dc3-a4b9-cfa874592aaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.462 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-077c46d0-8d19-4d3f-a5fe-650ee517c2b7
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/077c46d0-8d19-4d3f-a5fe-650ee517c2b7.pid.haproxy
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID 077c46d0-8d19-4d3f-a5fe-650ee517c2b7
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb  2 07:02:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:17.462 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7', 'env', 'PROCESS_TAG=haproxy-077c46d0-8d19-4d3f-a5fe-650ee517c2b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/077c46d0-8d19-4d3f-a5fe-650ee517c2b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb  2 07:02:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]: {
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:    "0": [
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:        {
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "devices": [
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "/dev/loop3"
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            ],
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_name": "ceph_lv0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_size": "21470642176",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "name": "ceph_lv0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "tags": {
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.cluster_name": "ceph",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.crush_device_class": "",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.encrypted": "0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.objectstore": "bluestore",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.osd_id": "0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.type": "block",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.vdo": "0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.with_tpm": "0"
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            },
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "type": "block",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "vg_name": "ceph_vg0"
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:        }
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:    ],
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:    "1": [
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:        {
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "devices": [
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "/dev/loop4"
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            ],
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_name": "ceph_lv1",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_size": "21470642176",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "name": "ceph_lv1",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "tags": {
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.cluster_name": "ceph",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.crush_device_class": "",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.encrypted": "0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.objectstore": "bluestore",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.osd_id": "1",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.type": "block",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.vdo": "0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.with_tpm": "0"
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            },
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "type": "block",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "vg_name": "ceph_vg1"
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:        }
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:    ],
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:    "2": [
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:        {
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "devices": [
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "/dev/loop5"
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            ],
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_name": "ceph_lv2",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_size": "21470642176",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "name": "ceph_lv2",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "tags": {
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.cluster_name": "ceph",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.crush_device_class": "",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.encrypted": "0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.objectstore": "bluestore",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.osd_id": "2",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.type": "block",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.vdo": "0",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:                "ceph.with_tpm": "0"
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            },
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "type": "block",
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:            "vg_name": "ceph_vg2"
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:        }
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]:    ]
Feb  2 07:02:17 np0005604943 bold_lehmann[258171]: }
Feb  2 07:02:17 np0005604943 systemd[1]: libpod-bacc11ebae648ffa348467c0cb238cd3dc57acb6665763db1c0af184d5c56daa.scope: Deactivated successfully.
Feb  2 07:02:17 np0005604943 podman[258231]: 2026-02-02 12:02:17.738541719 +0000 UTC m=+0.027059957 container died bacc11ebae648ffa348467c0cb238cd3dc57acb6665763db1c0af184d5c56daa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lehmann, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:02:17 np0005604943 systemd[1]: var-lib-containers-storage-overlay-6399f7e3ff70f449967813accda5d68ad1cb692ad4abf9b752fd14e0aee5c8f8-merged.mount: Deactivated successfully.
Feb  2 07:02:17 np0005604943 podman[258231]: 2026-02-02 12:02:17.777756699 +0000 UTC m=+0.066274917 container remove bacc11ebae648ffa348467c0cb238cd3dc57acb6665763db1c0af184d5c56daa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 07:02:17 np0005604943 systemd[1]: libpod-conmon-bacc11ebae648ffa348467c0cb238cd3dc57acb6665763db1c0af184d5c56daa.scope: Deactivated successfully.
Feb  2 07:02:17 np0005604943 podman[258258]: 2026-02-02 12:02:17.796335862 +0000 UTC m=+0.061094291 container create 2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 07:02:17 np0005604943 systemd[1]: Started libpod-conmon-2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926.scope.
Feb  2 07:02:17 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:02:17 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2883d592b85f1e144db8563a088e1ef0854cce11e8b6a65ab0dc2ac3372b26/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.845 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033737.8445234, 74c6efeb-3664-46ac-a191-a2af260625f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.846 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] VM Started (Lifecycle Event)
Feb  2 07:02:17 np0005604943 podman[258258]: 2026-02-02 12:02:17.85554221 +0000 UTC m=+0.120300579 container init 2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb  2 07:02:17 np0005604943 podman[258258]: 2026-02-02 12:02:17.860648335 +0000 UTC m=+0.125406684 container start 2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 07:02:17 np0005604943 podman[258258]: 2026-02-02 12:02:17.765582817 +0000 UTC m=+0.030341166 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.873 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:02:17 np0005604943 neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7[258285]: [NOTICE]   (258307) : New worker (258314) forked
Feb  2 07:02:17 np0005604943 neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7[258285]: [NOTICE]   (258307) : Loading success.
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.890 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033737.8446782, 74c6efeb-3664-46ac-a191-a2af260625f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.890 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] VM Paused (Lifecycle Event)
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.918 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.922 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 07:02:17 np0005604943 nova_compute[238883]: 2026-02-02 12:02:17.948 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 07:02:18 np0005604943 podman[258365]: 2026-02-02 12:02:18.229268024 +0000 UTC m=+0.040930975 container create 84c55dbfe54d887b651ad57fca9a34aa2b0bccec05f6d3e8a41ff309f3aa1fff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jackson, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 07:02:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Feb  2 07:02:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Feb  2 07:02:18 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Feb  2 07:02:18 np0005604943 systemd[1]: Started libpod-conmon-84c55dbfe54d887b651ad57fca9a34aa2b0bccec05f6d3e8a41ff309f3aa1fff.scope.
Feb  2 07:02:18 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:02:18 np0005604943 podman[258365]: 2026-02-02 12:02:18.294075241 +0000 UTC m=+0.105738202 container init 84c55dbfe54d887b651ad57fca9a34aa2b0bccec05f6d3e8a41ff309f3aa1fff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:02:18 np0005604943 podman[258365]: 2026-02-02 12:02:18.300631865 +0000 UTC m=+0.112294806 container start 84c55dbfe54d887b651ad57fca9a34aa2b0bccec05f6d3e8a41ff309f3aa1fff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Feb  2 07:02:18 np0005604943 podman[258365]: 2026-02-02 12:02:18.210017834 +0000 UTC m=+0.021680805 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:02:18 np0005604943 podman[258365]: 2026-02-02 12:02:18.303874791 +0000 UTC m=+0.115537732 container attach 84c55dbfe54d887b651ad57fca9a34aa2b0bccec05f6d3e8a41ff309f3aa1fff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb  2 07:02:18 np0005604943 zealous_jackson[258382]: 167 167
Feb  2 07:02:18 np0005604943 systemd[1]: libpod-84c55dbfe54d887b651ad57fca9a34aa2b0bccec05f6d3e8a41ff309f3aa1fff.scope: Deactivated successfully.
Feb  2 07:02:18 np0005604943 podman[258365]: 2026-02-02 12:02:18.306271835 +0000 UTC m=+0.117934776 container died 84c55dbfe54d887b651ad57fca9a34aa2b0bccec05f6d3e8a41ff309f3aa1fff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:02:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 226 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.1 KiB/s wr, 58 op/s
Feb  2 07:02:18 np0005604943 systemd[1]: var-lib-containers-storage-overlay-77172bf5dc90e0fdaac6b9d1398ef8826041bb3efb46b830fab76d01b0b4d40f-merged.mount: Deactivated successfully.
Feb  2 07:02:18 np0005604943 podman[258365]: 2026-02-02 12:02:18.352422218 +0000 UTC m=+0.164085159 container remove 84c55dbfe54d887b651ad57fca9a34aa2b0bccec05f6d3e8a41ff309f3aa1fff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:02:18 np0005604943 systemd[1]: libpod-conmon-84c55dbfe54d887b651ad57fca9a34aa2b0bccec05f6d3e8a41ff309f3aa1fff.scope: Deactivated successfully.
Feb  2 07:02:18 np0005604943 podman[258406]: 2026-02-02 12:02:18.514103092 +0000 UTC m=+0.044045818 container create 8e7413564ac9093bbaeeb5e1aed02113313a73def8cbb9c806de5678638d2f2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 07:02:18 np0005604943 systemd[1]: Started libpod-conmon-8e7413564ac9093bbaeeb5e1aed02113313a73def8cbb9c806de5678638d2f2a.scope.
Feb  2 07:02:18 np0005604943 podman[258406]: 2026-02-02 12:02:18.497954194 +0000 UTC m=+0.027896970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:02:18 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:02:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e8f716051452fb77b061fa655de2ccdc2f351f83b4dcffcdbf30364875c1c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e8f716051452fb77b061fa655de2ccdc2f351f83b4dcffcdbf30364875c1c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e8f716051452fb77b061fa655de2ccdc2f351f83b4dcffcdbf30364875c1c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:18 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e8f716051452fb77b061fa655de2ccdc2f351f83b4dcffcdbf30364875c1c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:02:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1760990595' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:18 np0005604943 podman[258406]: 2026-02-02 12:02:18.612073148 +0000 UTC m=+0.142015894 container init 8e7413564ac9093bbaeeb5e1aed02113313a73def8cbb9c806de5678638d2f2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 07:02:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1760990595' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:18 np0005604943 podman[258406]: 2026-02-02 12:02:18.622059072 +0000 UTC m=+0.152001788 container start 8e7413564ac9093bbaeeb5e1aed02113313a73def8cbb9c806de5678638d2f2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:02:18 np0005604943 podman[258406]: 2026-02-02 12:02:18.626684826 +0000 UTC m=+0.156627562 container attach 8e7413564ac9093bbaeeb5e1aed02113313a73def8cbb9c806de5678638d2f2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 07:02:19 np0005604943 lvm[258499]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:02:19 np0005604943 lvm[258499]: VG ceph_vg0 finished
Feb  2 07:02:19 np0005604943 lvm[258501]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:02:19 np0005604943 lvm[258501]: VG ceph_vg1 finished
Feb  2 07:02:19 np0005604943 lvm[258502]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:02:19 np0005604943 lvm[258502]: VG ceph_vg2 finished
Feb  2 07:02:19 np0005604943 tender_chaplygin[258423]: {}
Feb  2 07:02:19 np0005604943 systemd[1]: libpod-8e7413564ac9093bbaeeb5e1aed02113313a73def8cbb9c806de5678638d2f2a.scope: Deactivated successfully.
Feb  2 07:02:19 np0005604943 podman[258406]: 2026-02-02 12:02:19.34123217 +0000 UTC m=+0.871174896 container died 8e7413564ac9093bbaeeb5e1aed02113313a73def8cbb9c806de5678638d2f2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:02:19 np0005604943 systemd[1]: var-lib-containers-storage-overlay-66e8f716051452fb77b061fa655de2ccdc2f351f83b4dcffcdbf30364875c1c6-merged.mount: Deactivated successfully.
Feb  2 07:02:19 np0005604943 podman[258406]: 2026-02-02 12:02:19.37670187 +0000 UTC m=+0.906644596 container remove 8e7413564ac9093bbaeeb5e1aed02113313a73def8cbb9c806de5678638d2f2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 07:02:19 np0005604943 systemd[1]: libpod-conmon-8e7413564ac9093bbaeeb5e1aed02113313a73def8cbb9c806de5678638d2f2a.scope: Deactivated successfully.
Feb  2 07:02:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:02:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:02:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.275 238887 DEBUG nova.compute.manager [req-272ec8e1-3fcd-4ff2-a36a-511dee69ed9a req-f652d7e1-b7e7-47a9-97b8-bcb5d13a4cdf 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Received event network-vif-plugged-1849877d-6591-447e-a3a5-68b010c64ba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.276 238887 DEBUG oslo_concurrency.lockutils [req-272ec8e1-3fcd-4ff2-a36a-511dee69ed9a req-f652d7e1-b7e7-47a9-97b8-bcb5d13a4cdf 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.276 238887 DEBUG oslo_concurrency.lockutils [req-272ec8e1-3fcd-4ff2-a36a-511dee69ed9a req-f652d7e1-b7e7-47a9-97b8-bcb5d13a4cdf 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.277 238887 DEBUG oslo_concurrency.lockutils [req-272ec8e1-3fcd-4ff2-a36a-511dee69ed9a req-f652d7e1-b7e7-47a9-97b8-bcb5d13a4cdf 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.277 238887 DEBUG nova.compute.manager [req-272ec8e1-3fcd-4ff2-a36a-511dee69ed9a req-f652d7e1-b7e7-47a9-97b8-bcb5d13a4cdf 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Processing event network-vif-plugged-1849877d-6591-447e-a3a5-68b010c64ba2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.277 238887 DEBUG nova.compute.manager [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.281 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033740.2808776, 74c6efeb-3664-46ac-a191-a2af260625f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.281 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.284 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.288 238887 INFO nova.virt.libvirt.driver [-] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Instance spawned successfully.#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.288 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.308 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.315 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:02:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 227 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 30 KiB/s wr, 143 op/s
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.319 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.320 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.320 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.320 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.321 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.321 238887 DEBUG nova.virt.libvirt.driver [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.333 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.386 238887 INFO nova.compute.manager [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Took 6.89 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.387 238887 DEBUG nova.compute.manager [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:02:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.456 238887 INFO nova.compute.manager [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Took 9.47 seconds to build instance.#033[00m
Feb  2 07:02:20 np0005604943 nova_compute[238883]: 2026-02-02 12:02:20.485 238887 DEBUG oslo_concurrency.lockutils [None req-f9d8a4e1-def4-44cc-8f95-76ca854a2516 d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4109894953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:21 np0005604943 nova_compute[238883]: 2026-02-02 12:02:21.298 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:21 np0005604943 nova_compute[238883]: 2026-02-02 12:02:21.366 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.423450987390408e-06 of space, bias 1.0, pg target 0.0013270352962171223 quantized to 32 (current 32)
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0010505624602081146 of space, bias 1.0, pg target 0.3151687380624344 quantized to 32 (current 32)
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.00034752439814495097 of space, bias 1.0, pg target 0.10425731944348529 quantized to 32 (current 32)
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662010486729716 of space, bias 1.0, pg target 0.1998603146018915 quantized to 32 (current 32)
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0896584496582253e-06 of space, bias 4.0, pg target 0.0013075901395898704 quantized to 16 (current 16)
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:02:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 07:02:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 227 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 689 KiB/s rd, 26 KiB/s wr, 144 op/s
Feb  2 07:02:22 np0005604943 nova_compute[238883]: 2026-02-02 12:02:22.387 238887 DEBUG nova.compute.manager [req-ca90992a-c9ec-404e-8178-27e78307e993 req-bba332bd-ce67-4a77-8dfb-078b3fb94c9b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Received event network-vif-plugged-1849877d-6591-447e-a3a5-68b010c64ba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:02:22 np0005604943 nova_compute[238883]: 2026-02-02 12:02:22.387 238887 DEBUG oslo_concurrency.lockutils [req-ca90992a-c9ec-404e-8178-27e78307e993 req-bba332bd-ce67-4a77-8dfb-078b3fb94c9b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:22 np0005604943 nova_compute[238883]: 2026-02-02 12:02:22.388 238887 DEBUG oslo_concurrency.lockutils [req-ca90992a-c9ec-404e-8178-27e78307e993 req-bba332bd-ce67-4a77-8dfb-078b3fb94c9b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:22 np0005604943 nova_compute[238883]: 2026-02-02 12:02:22.388 238887 DEBUG oslo_concurrency.lockutils [req-ca90992a-c9ec-404e-8178-27e78307e993 req-bba332bd-ce67-4a77-8dfb-078b3fb94c9b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:22 np0005604943 nova_compute[238883]: 2026-02-02 12:02:22.388 238887 DEBUG nova.compute.manager [req-ca90992a-c9ec-404e-8178-27e78307e993 req-bba332bd-ce67-4a77-8dfb-078b3fb94c9b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] No waiting events found dispatching network-vif-plugged-1849877d-6591-447e-a3a5-68b010c64ba2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:02:22 np0005604943 nova_compute[238883]: 2026-02-02 12:02:22.389 238887 WARNING nova.compute.manager [req-ca90992a-c9ec-404e-8178-27e78307e993 req-bba332bd-ce67-4a77-8dfb-078b3fb94c9b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Received unexpected event network-vif-plugged-1849877d-6591-447e-a3a5-68b010c64ba2 for instance with vm_state active and task_state None.#033[00m
Feb  2 07:02:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Feb  2 07:02:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Feb  2 07:02:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Feb  2 07:02:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:02:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Feb  2 07:02:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Feb  2 07:02:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Feb  2 07:02:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3029773093' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:23 np0005604943 nova_compute[238883]: 2026-02-02 12:02:23.072 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:23 np0005604943 NetworkManager[49093]: <info>  [1770033743.0757] manager: (patch-br-int-to-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Feb  2 07:02:23 np0005604943 NetworkManager[49093]: <info>  [1770033743.0766] manager: (patch-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Feb  2 07:02:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:02:23Z|00142|binding|INFO|Releasing lport ebb31857-67a0-43cf-9b71-c8cb9596225d from this chassis (sb_readonly=0)
Feb  2 07:02:23 np0005604943 nova_compute[238883]: 2026-02-02 12:02:23.143 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:23 np0005604943 nova_compute[238883]: 2026-02-02 12:02:23.156 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:23 np0005604943 nova_compute[238883]: 2026-02-02 12:02:23.578 238887 DEBUG nova.compute.manager [req-d6ca071a-d2bc-4388-af23-8dba76f0d3a4 req-6fb5d0c9-59a2-495c-9a64-5594eb659642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Received event network-changed-1849877d-6591-447e-a3a5-68b010c64ba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:02:23 np0005604943 nova_compute[238883]: 2026-02-02 12:02:23.578 238887 DEBUG nova.compute.manager [req-d6ca071a-d2bc-4388-af23-8dba76f0d3a4 req-6fb5d0c9-59a2-495c-9a64-5594eb659642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Refreshing instance network info cache due to event network-changed-1849877d-6591-447e-a3a5-68b010c64ba2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:02:23 np0005604943 nova_compute[238883]: 2026-02-02 12:02:23.579 238887 DEBUG oslo_concurrency.lockutils [req-d6ca071a-d2bc-4388-af23-8dba76f0d3a4 req-6fb5d0c9-59a2-495c-9a64-5594eb659642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:02:23 np0005604943 nova_compute[238883]: 2026-02-02 12:02:23.579 238887 DEBUG oslo_concurrency.lockutils [req-d6ca071a-d2bc-4388-af23-8dba76f0d3a4 req-6fb5d0c9-59a2-495c-9a64-5594eb659642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:02:23 np0005604943 nova_compute[238883]: 2026-02-02 12:02:23.579 238887 DEBUG nova.network.neutron [req-d6ca071a-d2bc-4388-af23-8dba76f0d3a4 req-6fb5d0c9-59a2-495c-9a64-5594eb659642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Refreshing network info cache for port 1849877d-6591-447e-a3a5-68b010c64ba2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:02:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Feb  2 07:02:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Feb  2 07:02:23 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Feb  2 07:02:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 227 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 267 op/s
Feb  2 07:02:24 np0005604943 nova_compute[238883]: 2026-02-02 12:02:24.465 238887 DEBUG nova.network.neutron [req-d6ca071a-d2bc-4388-af23-8dba76f0d3a4 req-6fb5d0c9-59a2-495c-9a64-5594eb659642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updated VIF entry in instance network info cache for port 1849877d-6591-447e-a3a5-68b010c64ba2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:02:24 np0005604943 nova_compute[238883]: 2026-02-02 12:02:24.466 238887 DEBUG nova.network.neutron [req-d6ca071a-d2bc-4388-af23-8dba76f0d3a4 req-6fb5d0c9-59a2-495c-9a64-5594eb659642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updating instance_info_cache with network_info: [{"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:02:24 np0005604943 nova_compute[238883]: 2026-02-02 12:02:24.481 238887 DEBUG oslo_concurrency.lockutils [req-d6ca071a-d2bc-4388-af23-8dba76f0d3a4 req-6fb5d0c9-59a2-495c-9a64-5594eb659642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:02:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Feb  2 07:02:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Feb  2 07:02:24 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Feb  2 07:02:25 np0005604943 nova_compute[238883]: 2026-02-02 12:02:25.812 238887 DEBUG nova.compute.manager [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Received event network-changed-1849877d-6591-447e-a3a5-68b010c64ba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:02:25 np0005604943 nova_compute[238883]: 2026-02-02 12:02:25.813 238887 DEBUG nova.compute.manager [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Refreshing instance network info cache due to event network-changed-1849877d-6591-447e-a3a5-68b010c64ba2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:02:25 np0005604943 nova_compute[238883]: 2026-02-02 12:02:25.813 238887 DEBUG oslo_concurrency.lockutils [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:02:25 np0005604943 nova_compute[238883]: 2026-02-02 12:02:25.814 238887 DEBUG oslo_concurrency.lockutils [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:02:25 np0005604943 nova_compute[238883]: 2026-02-02 12:02:25.814 238887 DEBUG nova.network.neutron [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Refreshing network info cache for port 1849877d-6591-447e-a3a5-68b010c64ba2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:02:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2249047480' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:26 np0005604943 nova_compute[238883]: 2026-02-02 12:02:26.301 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 227 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.2 KiB/s wr, 220 op/s
Feb  2 07:02:26 np0005604943 nova_compute[238883]: 2026-02-02 12:02:26.369 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Feb  2 07:02:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Feb  2 07:02:26 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Feb  2 07:02:26 np0005604943 nova_compute[238883]: 2026-02-02 12:02:26.770 238887 DEBUG nova.network.neutron [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updated VIF entry in instance network info cache for port 1849877d-6591-447e-a3a5-68b010c64ba2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:02:26 np0005604943 nova_compute[238883]: 2026-02-02 12:02:26.771 238887 DEBUG nova.network.neutron [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updating instance_info_cache with network_info: [{"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:02:26 np0005604943 nova_compute[238883]: 2026-02-02 12:02:26.792 238887 DEBUG oslo_concurrency.lockutils [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:02:26 np0005604943 nova_compute[238883]: 2026-02-02 12:02:26.792 238887 DEBUG nova.compute.manager [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Received event network-changed-1849877d-6591-447e-a3a5-68b010c64ba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:02:26 np0005604943 nova_compute[238883]: 2026-02-02 12:02:26.792 238887 DEBUG nova.compute.manager [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Refreshing instance network info cache due to event network-changed-1849877d-6591-447e-a3a5-68b010c64ba2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:02:26 np0005604943 nova_compute[238883]: 2026-02-02 12:02:26.793 238887 DEBUG oslo_concurrency.lockutils [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:02:26 np0005604943 nova_compute[238883]: 2026-02-02 12:02:26.793 238887 DEBUG oslo_concurrency.lockutils [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:02:26 np0005604943 nova_compute[238883]: 2026-02-02 12:02:26.793 238887 DEBUG nova.network.neutron [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Refreshing network info cache for port 1849877d-6591-447e-a3a5-68b010c64ba2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:02:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:02:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Feb  2 07:02:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Feb  2 07:02:27 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Feb  2 07:02:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/255345000' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/255345000' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 303 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 16 MiB/s wr, 44 op/s
Feb  2 07:02:29 np0005604943 nova_compute[238883]: 2026-02-02 12:02:29.185 238887 DEBUG nova.network.neutron [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updated VIF entry in instance network info cache for port 1849877d-6591-447e-a3a5-68b010c64ba2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:02:29 np0005604943 nova_compute[238883]: 2026-02-02 12:02:29.186 238887 DEBUG nova.network.neutron [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updating instance_info_cache with network_info: [{"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:02:29 np0005604943 nova_compute[238883]: 2026-02-02 12:02:29.213 238887 DEBUG oslo_concurrency.lockutils [req-85d655de-6ff8-4567-ac3a-577bc04d6d82 req-cd983bd1-cca5-4a1d-87df-87a736d6d186 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:02:29 np0005604943 nova_compute[238883]: 2026-02-02 12:02:29.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:02:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 443 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 140 KiB/s rd, 36 MiB/s wr, 213 op/s
Feb  2 07:02:31 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb  2 07:02:31 np0005604943 nova_compute[238883]: 2026-02-02 12:02:31.306 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:31 np0005604943 nova_compute[238883]: 2026-02-02 12:02:31.369 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1729466789' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Feb  2 07:02:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Feb  2 07:02:31 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Feb  2 07:02:31 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb  2 07:02:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 1.0 GiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 425 KiB/s rd, 136 MiB/s wr, 310 op/s
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.627013) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033752627040, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1617, "num_deletes": 267, "total_data_size": 2116075, "memory_usage": 2150160, "flush_reason": "Manual Compaction"}
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033752636666, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 2054932, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24008, "largest_seqno": 25623, "table_properties": {"data_size": 2047078, "index_size": 4672, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17333, "raw_average_key_size": 20, "raw_value_size": 2030882, "raw_average_value_size": 2435, "num_data_blocks": 206, "num_entries": 834, "num_filter_entries": 834, "num_deletions": 267, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033668, "oldest_key_time": 1770033668, "file_creation_time": 1770033752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 9710 microseconds, and 3599 cpu microseconds.
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.636724) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 2054932 bytes OK
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.636739) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.638092) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.638103) EVENT_LOG_v1 {"time_micros": 1770033752638100, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.638120) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2108629, prev total WAL file size 2108629, number of live WAL files 2.
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.638553) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(2006KB)], [53(9169KB)]
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033752638600, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11444312, "oldest_snapshot_seqno": -1}
Feb  2 07:02:32 np0005604943 nova_compute[238883]: 2026-02-02 12:02:32.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:02:32 np0005604943 nova_compute[238883]: 2026-02-02 12:02:32.659 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:32 np0005604943 nova_compute[238883]: 2026-02-02 12:02:32.659 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:32 np0005604943 nova_compute[238883]: 2026-02-02 12:02:32.659 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:32 np0005604943 nova_compute[238883]: 2026-02-02 12:02:32.660 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:02:32 np0005604943 nova_compute[238883]: 2026-02-02 12:02:32.660 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5553 keys, 11341789 bytes, temperature: kUnknown
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033752683931, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 11341789, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11296807, "index_size": 30024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 137667, "raw_average_key_size": 24, "raw_value_size": 11189144, "raw_average_value_size": 2014, "num_data_blocks": 1239, "num_entries": 5553, "num_filter_entries": 5553, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770033752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.684279) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 11341789 bytes
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.685743) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 251.8 rd, 249.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.0 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(11.1) write-amplify(5.5) OK, records in: 6097, records dropped: 544 output_compression: NoCompression
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.685760) EVENT_LOG_v1 {"time_micros": 1770033752685753, "job": 28, "event": "compaction_finished", "compaction_time_micros": 45458, "compaction_time_cpu_micros": 22315, "output_level": 6, "num_output_files": 1, "total_output_size": 11341789, "num_input_records": 6097, "num_output_records": 5553, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033752686007, "job": 28, "event": "table_file_deletion", "file_number": 55}
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033752686925, "job": 28, "event": "table_file_deletion", "file_number": 53}
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.638493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.686957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.686960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.686962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.686963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:02:32 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:02:32.686964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:02:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:02:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2904611418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:02:33 np0005604943 nova_compute[238883]: 2026-02-02 12:02:33.176 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:02:33 np0005604943 nova_compute[238883]: 2026-02-02 12:02:33.243 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:02:33 np0005604943 nova_compute[238883]: 2026-02-02 12:02:33.244 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:02:33 np0005604943 nova_compute[238883]: 2026-02-02 12:02:33.373 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:02:33 np0005604943 nova_compute[238883]: 2026-02-02 12:02:33.374 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4337MB free_disk=59.988014937378466GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:02:33 np0005604943 nova_compute[238883]: 2026-02-02 12:02:33.374 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:33 np0005604943 nova_compute[238883]: 2026-02-02 12:02:33.375 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:33 np0005604943 ovn_controller[145056]: 2026-02-02T12:02:33Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c5:d3:e9 10.100.0.14
Feb  2 07:02:33 np0005604943 ovn_controller[145056]: 2026-02-02T12:02:33Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c5:d3:e9 10.100.0.14
Feb  2 07:02:33 np0005604943 nova_compute[238883]: 2026-02-02 12:02:33.553 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 74c6efeb-3664-46ac-a191-a2af260625f7 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:02:33 np0005604943 nova_compute[238883]: 2026-02-02 12:02:33.553 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:02:33 np0005604943 nova_compute[238883]: 2026-02-02 12:02:33.554 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:02:33 np0005604943 nova_compute[238883]: 2026-02-02 12:02:33.662 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:02:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Feb  2 07:02:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Feb  2 07:02:33 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Feb  2 07:02:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:02:34 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/682604978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:02:34 np0005604943 nova_compute[238883]: 2026-02-02 12:02:34.240 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:02:34 np0005604943 nova_compute[238883]: 2026-02-02 12:02:34.244 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:02:34 np0005604943 nova_compute[238883]: 2026-02-02 12:02:34.262 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:02:34 np0005604943 nova_compute[238883]: 2026-02-02 12:02:34.283 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:02:34 np0005604943 nova_compute[238883]: 2026-02-02 12:02:34.284 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:34 np0005604943 nova_compute[238883]: 2026-02-02 12:02:34.284 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:02:34 np0005604943 nova_compute[238883]: 2026-02-02 12:02:34.284 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  2 07:02:34 np0005604943 nova_compute[238883]: 2026-02-02 12:02:34.297 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  2 07:02:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 1.3 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 162 MiB/s wr, 604 op/s
Feb  2 07:02:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Feb  2 07:02:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Feb  2 07:02:34 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Feb  2 07:02:35 np0005604943 nova_compute[238883]: 2026-02-02 12:02:35.298 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:02:35 np0005604943 nova_compute[238883]: 2026-02-02 12:02:35.298 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:02:35 np0005604943 nova_compute[238883]: 2026-02-02 12:02:35.299 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 07:02:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2653290062' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2653290062' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:35 np0005604943 nova_compute[238883]: 2026-02-02 12:02:35.514 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:02:35 np0005604943 nova_compute[238883]: 2026-02-02 12:02:35.515 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquired lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:02:35 np0005604943 nova_compute[238883]: 2026-02-02 12:02:35.515 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 07:02:35 np0005604943 nova_compute[238883]: 2026-02-02 12:02:35.516 238887 DEBUG nova.objects.instance [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 74c6efeb-3664-46ac-a191-a2af260625f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:02:36 np0005604943 nova_compute[238883]: 2026-02-02 12:02:36.308 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 1.3 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 187 MiB/s wr, 570 op/s
Feb  2 07:02:36 np0005604943 nova_compute[238883]: 2026-02-02 12:02:36.372 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:36 np0005604943 nova_compute[238883]: 2026-02-02 12:02:36.447 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updating instance_info_cache with network_info: [{"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:02:36 np0005604943 nova_compute[238883]: 2026-02-02 12:02:36.464 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Releasing lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:02:36 np0005604943 nova_compute[238883]: 2026-02-02 12:02:36.465 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 07:02:36 np0005604943 nova_compute[238883]: 2026-02-02 12:02:36.465 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:02:36 np0005604943 nova_compute[238883]: 2026-02-02 12:02:36.465 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:02:36 np0005604943 nova_compute[238883]: 2026-02-02 12:02:36.465 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:02:36 np0005604943 nova_compute[238883]: 2026-02-02 12:02:36.466 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:02:36 np0005604943 nova_compute[238883]: 2026-02-02 12:02:36.466 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:02:36 np0005604943 nova_compute[238883]: 2026-02-02 12:02:36.466 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:02:36 np0005604943 nova_compute[238883]: 2026-02-02 12:02:36.466 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  2 07:02:36 np0005604943 nova_compute[238883]: 2026-02-02 12:02:36.658 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:02:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4222738215' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1666181791' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:02:37 np0005604943 nova_compute[238883]: 2026-02-02 12:02:37.635 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:02:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Feb  2 07:02:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Feb  2 07:02:37 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Feb  2 07:02:38 np0005604943 podman[258590]: 2026-02-02 12:02:38.049545934 +0000 UTC m=+0.052149744 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 07:02:38 np0005604943 podman[258589]: 2026-02-02 12:02:38.069248976 +0000 UTC m=+0.072209475 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:02:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 1.3 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 609 KiB/s rd, 41 MiB/s wr, 413 op/s
Feb  2 07:02:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Feb  2 07:02:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Feb  2 07:02:38 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Feb  2 07:02:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3622247177' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3622247177' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.763 238887 DEBUG nova.compute.manager [req-c4a218e2-05c1-4b45-a852-81fd6d9227f4 req-b55cff18-1db6-4a0c-be01-df7e06eb04e6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Received event network-changed-1849877d-6591-447e-a3a5-68b010c64ba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.763 238887 DEBUG nova.compute.manager [req-c4a218e2-05c1-4b45-a852-81fd6d9227f4 req-b55cff18-1db6-4a0c-be01-df7e06eb04e6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Refreshing instance network info cache due to event network-changed-1849877d-6591-447e-a3a5-68b010c64ba2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.763 238887 DEBUG oslo_concurrency.lockutils [req-c4a218e2-05c1-4b45-a852-81fd6d9227f4 req-b55cff18-1db6-4a0c-be01-df7e06eb04e6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.764 238887 DEBUG oslo_concurrency.lockutils [req-c4a218e2-05c1-4b45-a852-81fd6d9227f4 req-b55cff18-1db6-4a0c-be01-df7e06eb04e6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.764 238887 DEBUG nova.network.neutron [req-c4a218e2-05c1-4b45-a852-81fd6d9227f4 req-b55cff18-1db6-4a0c-be01-df7e06eb04e6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Refreshing network info cache for port 1849877d-6591-447e-a3a5-68b010c64ba2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.866 238887 DEBUG oslo_concurrency.lockutils [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Acquiring lock "74c6efeb-3664-46ac-a191-a2af260625f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.866 238887 DEBUG oslo_concurrency.lockutils [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.867 238887 DEBUG oslo_concurrency.lockutils [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Acquiring lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.867 238887 DEBUG oslo_concurrency.lockutils [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.867 238887 DEBUG oslo_concurrency.lockutils [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.868 238887 INFO nova.compute.manager [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Terminating instance#033[00m
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.870 238887 DEBUG nova.compute.manager [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:02:39 np0005604943 kernel: tap1849877d-65 (unregistering): left promiscuous mode
Feb  2 07:02:39 np0005604943 NetworkManager[49093]: <info>  [1770033759.9237] device (tap1849877d-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.924 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:39 np0005604943 ovn_controller[145056]: 2026-02-02T12:02:39Z|00143|binding|INFO|Releasing lport 1849877d-6591-447e-a3a5-68b010c64ba2 from this chassis (sb_readonly=0)
Feb  2 07:02:39 np0005604943 ovn_controller[145056]: 2026-02-02T12:02:39Z|00144|binding|INFO|Setting lport 1849877d-6591-447e-a3a5-68b010c64ba2 down in Southbound
Feb  2 07:02:39 np0005604943 ovn_controller[145056]: 2026-02-02T12:02:39Z|00145|binding|INFO|Removing iface tap1849877d-65 ovn-installed in OVS
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.931 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.933 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:39.938 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:d3:e9 10.100.0.14'], port_security=['fa:16:3e:c5:d3:e9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '74c6efeb-3664-46ac-a191-a2af260625f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-077c46d0-8d19-4d3f-a5fe-650ee517c2b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a46b2cbe7d4757b891bffab0c70022', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6dd606e7-f627-4726-8281-fd244a3c544e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c68cdb9-bf26-4372-b7f1-4197b5921755, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=1849877d-6591-447e-a3a5-68b010c64ba2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:02:39 np0005604943 nova_compute[238883]: 2026-02-02 12:02:39.940 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:39.941 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 1849877d-6591-447e-a3a5-68b010c64ba2 in datapath 077c46d0-8d19-4d3f-a5fe-650ee517c2b7 unbound from our chassis#033[00m
Feb  2 07:02:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:39.943 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 077c46d0-8d19-4d3f-a5fe-650ee517c2b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:02:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:39.945 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd39cba-537f-477e-aad2-dd0467028a5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:39.945 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7 namespace which is not needed anymore#033[00m
Feb  2 07:02:39 np0005604943 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb  2 07:02:39 np0005604943 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 12.545s CPU time.
Feb  2 07:02:39 np0005604943 systemd-machined[206973]: Machine qemu-13-instance-0000000d terminated.
Feb  2 07:02:40 np0005604943 neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7[258285]: [NOTICE]   (258307) : haproxy version is 2.8.14-c23fe91
Feb  2 07:02:40 np0005604943 neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7[258285]: [NOTICE]   (258307) : path to executable is /usr/sbin/haproxy
Feb  2 07:02:40 np0005604943 neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7[258285]: [WARNING]  (258307) : Exiting Master process...
Feb  2 07:02:40 np0005604943 neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7[258285]: [ALERT]    (258307) : Current worker (258314) exited with code 143 (Terminated)
Feb  2 07:02:40 np0005604943 neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7[258285]: [WARNING]  (258307) : All workers exited. Exiting... (0)
Feb  2 07:02:40 np0005604943 systemd[1]: libpod-2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926.scope: Deactivated successfully.
Feb  2 07:02:40 np0005604943 podman[258653]: 2026-02-02 12:02:40.056437876 +0000 UTC m=+0.037950417 container died 2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:02:40 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926-userdata-shm.mount: Deactivated successfully.
Feb  2 07:02:40 np0005604943 systemd[1]: var-lib-containers-storage-overlay-6b2883d592b85f1e144db8563a088e1ef0854cce11e8b6a65ab0dc2ac3372b26-merged.mount: Deactivated successfully.
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.085 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.088 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:40 np0005604943 podman[258653]: 2026-02-02 12:02:40.08979913 +0000 UTC m=+0.071311671 container cleanup 2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:02:40 np0005604943 systemd[1]: libpod-conmon-2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926.scope: Deactivated successfully.
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.099 238887 INFO nova.virt.libvirt.driver [-] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Instance destroyed successfully.#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.099 238887 DEBUG nova.objects.instance [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lazy-loading 'resources' on Instance uuid 74c6efeb-3664-46ac-a191-a2af260625f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.123 238887 DEBUG nova.virt.libvirt.vif [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:02:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-185840292',display_name='tempest-TestVolumeBackupRestore-server-185840292',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-185840292',id=13,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOGJufSFQNB4hpBzcN9ivKAtSHK0eqhRDSthqnkmRBuzm9obWgGVHoAZsPaCUS4Vee+URE8lWDyUvVBxJaXiZ+7VUQcWNq0pRYsYvi7moWCSna6gLgc8i/WZy00S62zE6Q==',key_name='tempest-TestVolumeBackupRestore-1120990637',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:02:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='34a46b2cbe7d4757b891bffab0c70022',ramdisk_id='',reservation_id='r-gdl8u785',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-911136857',owner_user_name='tempest-TestVolumeBackupRestore-911136857-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:02:20Z,user_data=None,user_id='d8f09513610247a8bb0c10546e2d036e',uuid=74c6efeb-3664-46ac-a191-a2af260625f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.123 238887 DEBUG nova.network.os_vif_util [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Converting VIF {"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.124 238887 DEBUG nova.network.os_vif_util [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c5:d3:e9,bridge_name='br-int',has_traffic_filtering=True,id=1849877d-6591-447e-a3a5-68b010c64ba2,network=Network(077c46d0-8d19-4d3f-a5fe-650ee517c2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1849877d-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.124 238887 DEBUG os_vif [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c5:d3:e9,bridge_name='br-int',has_traffic_filtering=True,id=1849877d-6591-447e-a3a5-68b010c64ba2,network=Network(077c46d0-8d19-4d3f-a5fe-650ee517c2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1849877d-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.126 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.126 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1849877d-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.127 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.130 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.132 238887 INFO os_vif [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c5:d3:e9,bridge_name='br-int',has_traffic_filtering=True,id=1849877d-6591-447e-a3a5-68b010c64ba2,network=Network(077c46d0-8d19-4d3f-a5fe-650ee517c2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1849877d-65')#033[00m
Feb  2 07:02:40 np0005604943 podman[258692]: 2026-02-02 12:02:40.143694738 +0000 UTC m=+0.037273869 container remove 2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 07:02:40 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:40.150 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ced830b9-52fd-453b-b58a-7618e34d845f]: (4, ('Mon Feb  2 12:02:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7 (2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926)\n2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926\nMon Feb  2 12:02:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7 (2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926)\n2bc49dd83b2a6ae4b6c6501fba38f592d6c75b85a7c102afb74ed4450d825926\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:40 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:40.151 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2908b53a-d84d-4f45-ae38-c32a31289704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:40 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:40.152 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap077c46d0-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.153 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:40 np0005604943 kernel: tap077c46d0-80: left promiscuous mode
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.160 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.161 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:40 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:40.163 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[93616c3a-d70a-46ab-a4ca-b3b946b6d9cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:40 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:40.171 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[569f128a-3227-4882-9097-74b804f41873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:40 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:40.173 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[35ddca42-4eec-46d2-a257-90300e0962d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.176 238887 DEBUG nova.compute.manager [req-2396ca83-3ad3-460e-a633-5e15440939a3 req-f1dae9c1-3571-4fd7-95a4-c1ba2f63f33e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Received event network-vif-unplugged-1849877d-6591-447e-a3a5-68b010c64ba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.177 238887 DEBUG oslo_concurrency.lockutils [req-2396ca83-3ad3-460e-a633-5e15440939a3 req-f1dae9c1-3571-4fd7-95a4-c1ba2f63f33e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.177 238887 DEBUG oslo_concurrency.lockutils [req-2396ca83-3ad3-460e-a633-5e15440939a3 req-f1dae9c1-3571-4fd7-95a4-c1ba2f63f33e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.177 238887 DEBUG oslo_concurrency.lockutils [req-2396ca83-3ad3-460e-a633-5e15440939a3 req-f1dae9c1-3571-4fd7-95a4-c1ba2f63f33e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.177 238887 DEBUG nova.compute.manager [req-2396ca83-3ad3-460e-a633-5e15440939a3 req-f1dae9c1-3571-4fd7-95a4-c1ba2f63f33e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] No waiting events found dispatching network-vif-unplugged-1849877d-6591-447e-a3a5-68b010c64ba2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.178 238887 DEBUG nova.compute.manager [req-2396ca83-3ad3-460e-a633-5e15440939a3 req-f1dae9c1-3571-4fd7-95a4-c1ba2f63f33e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Received event network-vif-unplugged-1849877d-6591-447e-a3a5-68b010c64ba2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:02:40 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:40.185 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[bda70bd7-22af-410d-abca-6d238cda93ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418262, 'reachable_time': 16200, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258723, 'error': None, 'target': 'ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:40 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:40.187 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-077c46d0-8d19-4d3f-a5fe-650ee517c2b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:02:40 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:02:40.187 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[360e4a52-a7d8-482a-868a-2cad067901a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:02:40 np0005604943 systemd[1]: run-netns-ovnmeta\x2d077c46d0\x2d8d19\x2d4d3f\x2da5fe\x2d650ee517c2b7.mount: Deactivated successfully.
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.279 238887 INFO nova.virt.libvirt.driver [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Deleting instance files /var/lib/nova/instances/74c6efeb-3664-46ac-a191-a2af260625f7_del#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.280 238887 INFO nova.virt.libvirt.driver [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Deletion of /var/lib/nova/instances/74c6efeb-3664-46ac-a191-a2af260625f7_del complete#033[00m
Feb  2 07:02:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 1.3 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 65 KiB/s wr, 134 op/s
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.333 238887 INFO nova.compute.manager [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Took 0.46 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.333 238887 DEBUG oslo.service.loopingcall [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.334 238887 DEBUG nova.compute.manager [-] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:02:40 np0005604943 nova_compute[238883]: 2026-02-02 12:02:40.334 238887 DEBUG nova.network.neutron [-] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:02:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2847734070' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:02:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:02:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:02:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:02:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:02:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:02:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Feb  2 07:02:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Feb  2 07:02:40 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.031 238887 DEBUG nova.network.neutron [req-c4a218e2-05c1-4b45-a852-81fd6d9227f4 req-b55cff18-1db6-4a0c-be01-df7e06eb04e6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updated VIF entry in instance network info cache for port 1849877d-6591-447e-a3a5-68b010c64ba2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.032 238887 DEBUG nova.network.neutron [req-c4a218e2-05c1-4b45-a852-81fd6d9227f4 req-b55cff18-1db6-4a0c-be01-df7e06eb04e6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updating instance_info_cache with network_info: [{"id": "1849877d-6591-447e-a3a5-68b010c64ba2", "address": "fa:16:3e:c5:d3:e9", "network": {"id": "077c46d0-8d19-4d3f-a5fe-650ee517c2b7", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1827864277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "34a46b2cbe7d4757b891bffab0c70022", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1849877d-65", "ovs_interfaceid": "1849877d-6591-447e-a3a5-68b010c64ba2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.051 238887 DEBUG nova.network.neutron [-] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.053 238887 DEBUG oslo_concurrency.lockutils [req-c4a218e2-05c1-4b45-a852-81fd6d9227f4 req-b55cff18-1db6-4a0c-be01-df7e06eb04e6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-74c6efeb-3664-46ac-a191-a2af260625f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.067 238887 INFO nova.compute.manager [-] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Took 0.73 seconds to deallocate network for instance.#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.267 238887 INFO nova.compute.manager [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Took 0.20 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.314 238887 DEBUG oslo_concurrency.lockutils [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.314 238887 DEBUG oslo_concurrency.lockutils [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.361 238887 DEBUG oslo_concurrency.processutils [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.381 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.853 238887 DEBUG nova.compute.manager [req-2832b7e7-a640-4cd0-be9d-93eb275947f5 req-03dc958f-926f-40e3-ba98-1ed85036ff5d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Received event network-vif-deleted-1849877d-6591-447e-a3a5-68b010c64ba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:02:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:02:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4171489876' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.898 238887 DEBUG oslo_concurrency.processutils [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.902 238887 DEBUG nova.compute.provider_tree [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.922 238887 DEBUG nova.scheduler.client.report [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.943 238887 DEBUG oslo_concurrency.lockutils [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:41 np0005604943 nova_compute[238883]: 2026-02-02 12:02:41.972 238887 INFO nova.scheduler.client.report [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Deleted allocations for instance 74c6efeb-3664-46ac-a191-a2af260625f7#033[00m
Feb  2 07:02:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Feb  2 07:02:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Feb  2 07:02:41 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Feb  2 07:02:42 np0005604943 nova_compute[238883]: 2026-02-02 12:02:42.031 238887 DEBUG oslo_concurrency.lockutils [None req-ba28ad9d-9615-4d07-98cb-3aeca0a6a85c d8f09513610247a8bb0c10546e2d036e 34a46b2cbe7d4757b891bffab0c70022 - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:42 np0005604943 nova_compute[238883]: 2026-02-02 12:02:42.258 238887 DEBUG nova.compute.manager [req-bc371dfa-bcad-40ab-8512-388992a09249 req-6264fc35-a144-45c9-94b1-5827d5bb976a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Received event network-vif-plugged-1849877d-6591-447e-a3a5-68b010c64ba2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:02:42 np0005604943 nova_compute[238883]: 2026-02-02 12:02:42.258 238887 DEBUG oslo_concurrency.lockutils [req-bc371dfa-bcad-40ab-8512-388992a09249 req-6264fc35-a144-45c9-94b1-5827d5bb976a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:02:42 np0005604943 nova_compute[238883]: 2026-02-02 12:02:42.259 238887 DEBUG oslo_concurrency.lockutils [req-bc371dfa-bcad-40ab-8512-388992a09249 req-6264fc35-a144-45c9-94b1-5827d5bb976a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:02:42 np0005604943 nova_compute[238883]: 2026-02-02 12:02:42.259 238887 DEBUG oslo_concurrency.lockutils [req-bc371dfa-bcad-40ab-8512-388992a09249 req-6264fc35-a144-45c9-94b1-5827d5bb976a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "74c6efeb-3664-46ac-a191-a2af260625f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:02:42 np0005604943 nova_compute[238883]: 2026-02-02 12:02:42.259 238887 DEBUG nova.compute.manager [req-bc371dfa-bcad-40ab-8512-388992a09249 req-6264fc35-a144-45c9-94b1-5827d5bb976a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] No waiting events found dispatching network-vif-plugged-1849877d-6591-447e-a3a5-68b010c64ba2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:02:42 np0005604943 nova_compute[238883]: 2026-02-02 12:02:42.259 238887 WARNING nova.compute.manager [req-bc371dfa-bcad-40ab-8512-388992a09249 req-6264fc35-a144-45c9-94b1-5827d5bb976a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Received unexpected event network-vif-plugged-1849877d-6591-447e-a3a5-68b010c64ba2 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:02:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 1.3 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 183 KiB/s rd, 41 KiB/s wr, 243 op/s
Feb  2 07:02:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:02:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Feb  2 07:02:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Feb  2 07:02:42 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Feb  2 07:02:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Feb  2 07:02:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Feb  2 07:02:43 np0005604943 nova_compute[238883]: 2026-02-02 12:02:43.644 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:02:43 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Feb  2 07:02:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1289747038' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 1.3 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 13 KiB/s wr, 314 op/s
Feb  2 07:02:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2408406569' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2408406569' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Feb  2 07:02:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Feb  2 07:02:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Feb  2 07:02:45 np0005604943 nova_compute[238883]: 2026-02-02 12:02:45.130 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3661501571' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3661501571' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Feb  2 07:02:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Feb  2 07:02:46 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Feb  2 07:02:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3804668955' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3804668955' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 1.3 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 116 KiB/s rd, 5.7 KiB/s wr, 152 op/s
Feb  2 07:02:46 np0005604943 nova_compute[238883]: 2026-02-02 12:02:46.375 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Feb  2 07:02:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Feb  2 07:02:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Feb  2 07:02:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:02:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Feb  2 07:02:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Feb  2 07:02:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Feb  2 07:02:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:02:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1722688575' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:02:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:02:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1722688575' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:02:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 4 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 289 active+clean; 1.5 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 173 KiB/s rd, 90 MiB/s wr, 257 op/s
Feb  2 07:02:48 np0005604943 nova_compute[238883]: 2026-02-02 12:02:48.905 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:49 np0005604943 nova_compute[238883]: 2026-02-02 12:02:49.017 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:50 np0005604943 nova_compute[238883]: 2026-02-02 12:02:50.134 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 4 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 289 active+clean; 1.7 GiB data, 2.0 GiB used, 58 GiB / 60 GiB avail; 375 KiB/s rd, 129 MiB/s wr, 596 op/s
Feb  2 07:02:51 np0005604943 nova_compute[238883]: 2026-02-02 12:02:51.377 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Feb  2 07:02:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Feb  2 07:02:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Feb  2 07:02:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 2.0 GiB data, 2.2 GiB used, 58 GiB / 60 GiB avail; 356 KiB/s rd, 156 MiB/s wr, 570 op/s
Feb  2 07:02:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Feb  2 07:02:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Feb  2 07:02:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Feb  2 07:02:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:02:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Feb  2 07:02:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Feb  2 07:02:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Feb  2 07:02:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 285 KiB/s rd, 111 MiB/s wr, 475 op/s
Feb  2 07:02:55 np0005604943 nova_compute[238883]: 2026-02-02 12:02:55.099 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033760.0976806, 74c6efeb-3664-46ac-a191-a2af260625f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:02:55 np0005604943 nova_compute[238883]: 2026-02-02 12:02:55.100 238887 INFO nova.compute.manager [-] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:02:55 np0005604943 nova_compute[238883]: 2026-02-02 12:02:55.121 238887 DEBUG nova.compute.manager [None req-5c3d41f9-7636-48fb-8acd-cbec190d5244 - - - - - -] [instance: 74c6efeb-3664-46ac-a191-a2af260625f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:02:55 np0005604943 nova_compute[238883]: 2026-02-02 12:02:55.136 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:02:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/631290273' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:02:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 71 KiB/s rd, 57 MiB/s wr, 122 op/s
Feb  2 07:02:56 np0005604943 nova_compute[238883]: 2026-02-02 12:02:56.379 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:02:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Feb  2 07:02:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Feb  2 07:02:56 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Feb  2 07:02:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Feb  2 07:02:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Feb  2 07:02:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Feb  2 07:02:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:02:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Feb  2 07:02:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Feb  2 07:02:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Feb  2 07:02:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 MiB/s wr, 224 op/s
Feb  2 07:03:00 np0005604943 nova_compute[238883]: 2026-02-02 12:03:00.139 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.8 KiB/s wr, 211 op/s
Feb  2 07:03:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:03:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3422196925' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:03:01 np0005604943 nova_compute[238883]: 2026-02-02 12:03:01.098 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:01.099 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:03:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:01.100 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 07:03:01 np0005604943 nova_compute[238883]: 2026-02-02 12:03:01.380 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Feb  2 07:03:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Feb  2 07:03:01 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Feb  2 07:03:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 298 op/s
Feb  2 07:03:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:03:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1465539641' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:03:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:03:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Feb  2 07:03:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Feb  2 07:03:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Feb  2 07:03:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:03.102 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Feb  2 07:03:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Feb  2 07:03:03 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Feb  2 07:03:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.5 MiB/s wr, 182 op/s
Feb  2 07:03:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Feb  2 07:03:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Feb  2 07:03:04 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Feb  2 07:03:05 np0005604943 nova_compute[238883]: 2026-02-02 12:03:05.141 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:03:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/498519403' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:03:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:03:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/498519403' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:03:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 88 KiB/s rd, 4.6 MiB/s wr, 138 op/s
Feb  2 07:03:06 np0005604943 nova_compute[238883]: 2026-02-02 12:03:06.382 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:03:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:03:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/571049808' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:03:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 88 op/s
Feb  2 07:03:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Feb  2 07:03:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Feb  2 07:03:08 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Feb  2 07:03:09 np0005604943 podman[258750]: 2026-02-02 12:03:09.061485796 +0000 UTC m=+0.078306086 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Feb  2 07:03:09 np0005604943 podman[258749]: 2026-02-02 12:03:09.062938525 +0000 UTC m=+0.081472920 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:03:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:03:09
Feb  2 07:03:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:03:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:03:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'backups', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.mgr', 'default.rgw.log', 'default.rgw.meta']
Feb  2 07:03:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:03:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Feb  2 07:03:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Feb  2 07:03:09 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Feb  2 07:03:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:10.028 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:10.028 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:10.028 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:10 np0005604943 nova_compute[238883]: 2026-02-02 12:03:10.145 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 99 op/s
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:03:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Feb  2 07:03:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Feb  2 07:03:10 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:03:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:03:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:03:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2125497284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:03:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:03:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2125497284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:03:11 np0005604943 nova_compute[238883]: 2026-02-02 12:03:11.384 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Feb  2 07:03:12 np0005604943 nova_compute[238883]: 2026-02-02 12:03:12.471 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Acquiring lock "a96ffff3-5920-4e78-bdab-1435004f049f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:12 np0005604943 nova_compute[238883]: 2026-02-02 12:03:12.471 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:12 np0005604943 nova_compute[238883]: 2026-02-02 12:03:12.491 238887 DEBUG nova.compute.manager [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:03:12 np0005604943 nova_compute[238883]: 2026-02-02 12:03:12.586 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:12 np0005604943 nova_compute[238883]: 2026-02-02 12:03:12.586 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:12 np0005604943 nova_compute[238883]: 2026-02-02 12:03:12.594 238887 DEBUG nova.virt.hardware [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:03:12 np0005604943 nova_compute[238883]: 2026-02-02 12:03:12.594 238887 INFO nova.compute.claims [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:03:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:03:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Feb  2 07:03:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Feb  2 07:03:12 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Feb  2 07:03:12 np0005604943 nova_compute[238883]: 2026-02-02 12:03:12.726 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:03:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634915194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.291 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.296 238887 DEBUG nova.compute.provider_tree [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.313 238887 DEBUG nova.scheduler.client.report [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.358 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.359 238887 DEBUG nova.compute.manager [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.412 238887 DEBUG nova.compute.manager [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.412 238887 DEBUG nova.network.neutron [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.444 238887 INFO nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.478 238887 DEBUG nova.compute.manager [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.543 238887 INFO nova.virt.block_device [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Booting with volume db2f7182-fc47-4f30-a5d2-347e3f22e132 at /dev/vdb#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.656 238887 DEBUG os_brick.utils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.657 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.668 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.668 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[ce878e43-aba7-45cd-91fa-bd5e64f6710f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.669 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.675 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.675 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[2a9e6e6c-becf-41d0-9d06-353473d28149]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.677 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.683 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.683 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[335f171f-f6bd-4620-8aea-2afbd346185a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.684 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[1d966ad9-4028-4dd2-a8f7-941b808201c3]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.685 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.702 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.705 238887 DEBUG os_brick.initiator.connectors.lightos [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.705 238887 DEBUG os_brick.initiator.connectors.lightos [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.706 238887 DEBUG os_brick.initiator.connectors.lightos [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.706 238887 DEBUG os_brick.utils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] <== get_connector_properties: return (49ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.706 238887 DEBUG nova.virt.block_device [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Updating existing volume attachment record: 244067bf-715c-498f-b1eb-1b4b344d56c3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:03:13 np0005604943 nova_compute[238883]: 2026-02-02 12:03:13.844 238887 DEBUG nova.policy [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '60fb6bd172e548f3a5aaa37de0e4fc9f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ee083e554351460bb418a3d98b537343', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:03:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 62 KiB/s rd, 4.3 KiB/s wr, 85 op/s
Feb  2 07:03:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:03:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/815179774' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:03:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Feb  2 07:03:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Feb  2 07:03:14 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.712 238887 DEBUG nova.network.neutron [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Successfully created port: cb56f4bc-ae6e-4f97-afb4-1d300f11c761 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.804 238887 DEBUG nova.compute.manager [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.805 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.805 238887 INFO nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Creating image(s)#033[00m
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.828 238887 DEBUG nova.storage.rbd_utils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] rbd image a96ffff3-5920-4e78-bdab-1435004f049f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.848 238887 DEBUG nova.storage.rbd_utils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] rbd image a96ffff3-5920-4e78-bdab-1435004f049f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.869 238887 DEBUG nova.storage.rbd_utils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] rbd image a96ffff3-5920-4e78-bdab-1435004f049f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.872 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.937 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.938 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.939 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.939 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.961 238887 DEBUG nova.storage.rbd_utils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] rbd image a96ffff3-5920-4e78-bdab-1435004f049f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:03:14 np0005604943 nova_compute[238883]: 2026-02-02 12:03:14.965 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 a96ffff3-5920-4e78-bdab-1435004f049f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.148 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.320 238887 DEBUG nova.network.neutron [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Successfully updated port: cb56f4bc-ae6e-4f97-afb4-1d300f11c761 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.341 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Acquiring lock "refresh_cache-a96ffff3-5920-4e78-bdab-1435004f049f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.342 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Acquired lock "refresh_cache-a96ffff3-5920-4e78-bdab-1435004f049f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.342 238887 DEBUG nova.network.neutron [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.449 238887 DEBUG nova.compute.manager [req-e3b05f5e-3b08-4606-aa2c-420db3e3442e req-6d620ac0-cc7b-4153-bd5c-fb10f7b6aaf5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Received event network-changed-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.450 238887 DEBUG nova.compute.manager [req-e3b05f5e-3b08-4606-aa2c-420db3e3442e req-6d620ac0-cc7b-4153-bd5c-fb10f7b6aaf5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Refreshing instance network info cache due to event network-changed-cb56f4bc-ae6e-4f97-afb4-1d300f11c761. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.450 238887 DEBUG oslo_concurrency.lockutils [req-e3b05f5e-3b08-4606-aa2c-420db3e3442e req-6d620ac0-cc7b-4153-bd5c-fb10f7b6aaf5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-a96ffff3-5920-4e78-bdab-1435004f049f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.454 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 a96ffff3-5920-4e78-bdab-1435004f049f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.510 238887 DEBUG nova.network.neutron [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.517 238887 DEBUG nova.storage.rbd_utils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] resizing rbd image a96ffff3-5920-4e78-bdab-1435004f049f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 07:03:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.679 238887 DEBUG nova.objects.instance [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lazy-loading 'migration_context' on Instance uuid a96ffff3-5920-4e78-bdab-1435004f049f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.692 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.692 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Ensure instance console log exists: /var/lib/nova/instances/a96ffff3-5920-4e78-bdab-1435004f049f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.693 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.693 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:15 np0005604943 nova_compute[238883]: 2026-02-02 12:03:15.693 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Feb  2 07:03:15 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Feb  2 07:03:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 62 KiB/s rd, 4.3 KiB/s wr, 86 op/s
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.357 238887 DEBUG nova.network.neutron [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Updating instance_info_cache with network_info: [{"id": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "address": "fa:16:3e:db:79:b0", "network": {"id": "ed0c9eb2-5a02-4561-b216-a1cb6ff3164f", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574964546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee083e554351460bb418a3d98b537343", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb56f4bc-ae", "ovs_interfaceid": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.382 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Releasing lock "refresh_cache-a96ffff3-5920-4e78-bdab-1435004f049f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.382 238887 DEBUG nova.compute.manager [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Instance network_info: |[{"id": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "address": "fa:16:3e:db:79:b0", "network": {"id": "ed0c9eb2-5a02-4561-b216-a1cb6ff3164f", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574964546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee083e554351460bb418a3d98b537343", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb56f4bc-ae", "ovs_interfaceid": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.383 238887 DEBUG oslo_concurrency.lockutils [req-e3b05f5e-3b08-4606-aa2c-420db3e3442e req-6d620ac0-cc7b-4153-bd5c-fb10f7b6aaf5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-a96ffff3-5920-4e78-bdab-1435004f049f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.383 238887 DEBUG nova.network.neutron [req-e3b05f5e-3b08-4606-aa2c-420db3e3442e req-6d620ac0-cc7b-4153-bd5c-fb10f7b6aaf5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Refreshing network info cache for port cb56f4bc-ae6e-4f97-afb4-1d300f11c761 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.386 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Start _get_guest_xml network_info=[{"id": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "address": "fa:16:3e:db:79:b0", "network": {"id": "ed0c9eb2-5a02-4561-b216-a1cb6ff3164f", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574964546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee083e554351460bb418a3d98b537343", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb56f4bc-ae", "ovs_interfaceid": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'attachment_id': '244067bf-715c-498f-b1eb-1b4b344d56c3', 'delete_on_termination': False, 'guest_format': None, 'boot_index': -1, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-db2f7182-fc47-4f30-a5d2-347e3f22e132', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'db2f7182-fc47-4f30-a5d2-347e3f22e132', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a96ffff3-5920-4e78-bdab-1435004f049f', 'attached_at': '', 'detached_at': '', 'volume_id': 'db2f7182-fc47-4f30-a5d2-347e3f22e132', 'serial': 'db2f7182-fc47-4f30-a5d2-347e3f22e132'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.419 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.423 238887 WARNING nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.429 238887 DEBUG nova.virt.libvirt.host [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.429 238887 DEBUG nova.virt.libvirt.host [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.432 238887 DEBUG nova.virt.libvirt.host [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.434 238887 DEBUG nova.virt.libvirt.host [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.435 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.435 238887 DEBUG nova.virt.hardware [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.436 238887 DEBUG nova.virt.hardware [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.437 238887 DEBUG nova.virt.hardware [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.437 238887 DEBUG nova.virt.hardware [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.438 238887 DEBUG nova.virt.hardware [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.438 238887 DEBUG nova.virt.hardware [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.439 238887 DEBUG nova.virt.hardware [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.439 238887 DEBUG nova.virt.hardware [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.440 238887 DEBUG nova.virt.hardware [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.440 238887 DEBUG nova.virt.hardware [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.441 238887 DEBUG nova.virt.hardware [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.445 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:03:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240862042' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.960 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.989 238887 DEBUG nova.storage.rbd_utils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] rbd image a96ffff3-5920-4e78-bdab-1435004f049f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:03:16 np0005604943 nova_compute[238883]: 2026-02-02 12:03:16.994 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:03:17 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478335373' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.505 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.524 238887 DEBUG nova.network.neutron [req-e3b05f5e-3b08-4606-aa2c-420db3e3442e req-6d620ac0-cc7b-4153-bd5c-fb10f7b6aaf5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Updated VIF entry in instance network info cache for port cb56f4bc-ae6e-4f97-afb4-1d300f11c761. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.524 238887 DEBUG nova.network.neutron [req-e3b05f5e-3b08-4606-aa2c-420db3e3442e req-6d620ac0-cc7b-4153-bd5c-fb10f7b6aaf5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Updating instance_info_cache with network_info: [{"id": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "address": "fa:16:3e:db:79:b0", "network": {"id": "ed0c9eb2-5a02-4561-b216-a1cb6ff3164f", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574964546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee083e554351460bb418a3d98b537343", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb56f4bc-ae", "ovs_interfaceid": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.528 238887 DEBUG nova.virt.libvirt.vif [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:03:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1917573818',display_name='tempest-instance-1917573818',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1917573818',id=14,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHreMOpv+vG56C29AwHK7dAMPisR6ubzzXv8jjNXBzeJA+YypvAIWLUASYtM8sV/rTkRA72DOyN4tEasdZNuM3qKxpn4WpIZks80MjgEBt2yWjxqqZJ8dPVOJ01rpDLaHQ==',key_name='tempest-keypair-1465596797',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee083e554351460bb418a3d98b537343',ramdisk_id='',reservation_id='r-u9a62mcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',im
age_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1399459303',owner_user_name='tempest-VolumesBackupsTest-1399459303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:03:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='60fb6bd172e548f3a5aaa37de0e4fc9f',uuid=a96ffff3-5920-4e78-bdab-1435004f049f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "address": "fa:16:3e:db:79:b0", "network": {"id": "ed0c9eb2-5a02-4561-b216-a1cb6ff3164f", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574964546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee083e554351460bb418a3d98b537343", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb56f4bc-ae", "ovs_interfaceid": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.529 238887 DEBUG nova.network.os_vif_util [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Converting VIF {"id": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "address": "fa:16:3e:db:79:b0", "network": {"id": "ed0c9eb2-5a02-4561-b216-a1cb6ff3164f", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574964546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee083e554351460bb418a3d98b537343", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb56f4bc-ae", "ovs_interfaceid": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.529 238887 DEBUG nova.network.os_vif_util [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:79:b0,bridge_name='br-int',has_traffic_filtering=True,id=cb56f4bc-ae6e-4f97-afb4-1d300f11c761,network=Network(ed0c9eb2-5a02-4561-b216-a1cb6ff3164f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb56f4bc-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.530 238887 DEBUG nova.objects.instance [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lazy-loading 'pci_devices' on Instance uuid a96ffff3-5920-4e78-bdab-1435004f049f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.544 238887 DEBUG oslo_concurrency.lockutils [req-e3b05f5e-3b08-4606-aa2c-420db3e3442e req-6d620ac0-cc7b-4153-bd5c-fb10f7b6aaf5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-a96ffff3-5920-4e78-bdab-1435004f049f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.549 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  <uuid>a96ffff3-5920-4e78-bdab-1435004f049f</uuid>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  <name>instance-0000000e</name>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <nova:name>tempest-instance-1917573818</nova:name>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:03:16</nova:creationTime>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <nova:user uuid="60fb6bd172e548f3a5aaa37de0e4fc9f">tempest-VolumesBackupsTest-1399459303-project-member</nova:user>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <nova:project uuid="ee083e554351460bb418a3d98b537343">tempest-VolumesBackupsTest-1399459303</nova:project>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <nova:port uuid="cb56f4bc-ae6e-4f97-afb4-1d300f11c761">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <entry name="serial">a96ffff3-5920-4e78-bdab-1435004f049f</entry>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <entry name="uuid">a96ffff3-5920-4e78-bdab-1435004f049f</entry>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/a96ffff3-5920-4e78-bdab-1435004f049f_disk">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/a96ffff3-5920-4e78-bdab-1435004f049f_disk.config">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-db2f7182-fc47-4f30-a5d2-347e3f22e132">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <target dev="vdb" bus="virtio"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <serial>db2f7182-fc47-4f30-a5d2-347e3f22e132</serial>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:db:79:b0"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <target dev="tapcb56f4bc-ae"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/a96ffff3-5920-4e78-bdab-1435004f049f/console.log" append="off"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:03:17 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:03:17 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:03:17 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:03:17 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.550 238887 DEBUG nova.compute.manager [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Preparing to wait for external event network-vif-plugged-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.551 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Acquiring lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.551 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.552 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.552 238887 DEBUG nova.virt.libvirt.vif [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:03:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1917573818',display_name='tempest-instance-1917573818',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1917573818',id=14,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHreMOpv+vG56C29AwHK7dAMPisR6ubzzXv8jjNXBzeJA+YypvAIWLUASYtM8sV/rTkRA72DOyN4tEasdZNuM3qKxpn4WpIZks80MjgEBt2yWjxqqZJ8dPVOJ01rpDLaHQ==',key_name='tempest-keypair-1465596797',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee083e554351460bb418a3d98b537343',ramdisk_id='',reservation_id='r-u9a62mcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='
virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1399459303',owner_user_name='tempest-VolumesBackupsTest-1399459303-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:03:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='60fb6bd172e548f3a5aaa37de0e4fc9f',uuid=a96ffff3-5920-4e78-bdab-1435004f049f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "address": "fa:16:3e:db:79:b0", "network": {"id": "ed0c9eb2-5a02-4561-b216-a1cb6ff3164f", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574964546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee083e554351460bb418a3d98b537343", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb56f4bc-ae", "ovs_interfaceid": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.553 238887 DEBUG nova.network.os_vif_util [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Converting VIF {"id": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "address": "fa:16:3e:db:79:b0", "network": {"id": "ed0c9eb2-5a02-4561-b216-a1cb6ff3164f", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574964546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee083e554351460bb418a3d98b537343", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb56f4bc-ae", "ovs_interfaceid": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.553 238887 DEBUG nova.network.os_vif_util [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:79:b0,bridge_name='br-int',has_traffic_filtering=True,id=cb56f4bc-ae6e-4f97-afb4-1d300f11c761,network=Network(ed0c9eb2-5a02-4561-b216-a1cb6ff3164f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb56f4bc-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.554 238887 DEBUG os_vif [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:79:b0,bridge_name='br-int',has_traffic_filtering=True,id=cb56f4bc-ae6e-4f97-afb4-1d300f11c761,network=Network(ed0c9eb2-5a02-4561-b216-a1cb6ff3164f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb56f4bc-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.555 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.555 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.556 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.560 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.560 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcb56f4bc-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.561 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcb56f4bc-ae, col_values=(('external_ids', {'iface-id': 'cb56f4bc-ae6e-4f97-afb4-1d300f11c761', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:79:b0', 'vm-uuid': 'a96ffff3-5920-4e78-bdab-1435004f049f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.562 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:17 np0005604943 NetworkManager[49093]: <info>  [1770033797.5638] manager: (tapcb56f4bc-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.564 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.567 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.568 238887 INFO os_vif [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:79:b0,bridge_name='br-int',has_traffic_filtering=True,id=cb56f4bc-ae6e-4f97-afb4-1d300f11c761,network=Network(ed0c9eb2-5a02-4561-b216-a1cb6ff3164f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb56f4bc-ae')#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.614 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.615 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.615 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.615 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] No VIF found with MAC fa:16:3e:db:79:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.616 238887 INFO nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Using config drive#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.635 238887 DEBUG nova.storage.rbd_utils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] rbd image a96ffff3-5920-4e78-bdab-1435004f049f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:03:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:03:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Feb  2 07:03:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Feb  2 07:03:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.959 238887 INFO nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Creating config drive at /var/lib/nova/instances/a96ffff3-5920-4e78-bdab-1435004f049f/disk.config#033[00m
Feb  2 07:03:17 np0005604943 nova_compute[238883]: 2026-02-02 12:03:17.968 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a96ffff3-5920-4e78-bdab-1435004f049f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9agb9q_o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:03:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2521363925' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:03:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:03:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2521363925' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.096 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a96ffff3-5920-4e78-bdab-1435004f049f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9agb9q_o" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.125 238887 DEBUG nova.storage.rbd_utils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] rbd image a96ffff3-5920-4e78-bdab-1435004f049f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.130 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a96ffff3-5920-4e78-bdab-1435004f049f/disk.config a96ffff3-5920-4e78-bdab-1435004f049f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.251 238887 DEBUG oslo_concurrency.processutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a96ffff3-5920-4e78-bdab-1435004f049f/disk.config a96ffff3-5920-4e78-bdab-1435004f049f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.253 238887 INFO nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Deleting local config drive /var/lib/nova/instances/a96ffff3-5920-4e78-bdab-1435004f049f/disk.config because it was imported into RBD.#033[00m
Feb  2 07:03:18 np0005604943 kernel: tapcb56f4bc-ae: entered promiscuous mode
Feb  2 07:03:18 np0005604943 NetworkManager[49093]: <info>  [1770033798.2958] manager: (tapcb56f4bc-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Feb  2 07:03:18 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:18Z|00146|binding|INFO|Claiming lport cb56f4bc-ae6e-4f97-afb4-1d300f11c761 for this chassis.
Feb  2 07:03:18 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:18Z|00147|binding|INFO|cb56f4bc-ae6e-4f97-afb4-1d300f11c761: Claiming fa:16:3e:db:79:b0 10.100.0.12
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.298 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.302 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.310 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:79:b0 10.100.0.12'], port_security=['fa:16:3e:db:79:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a96ffff3-5920-4e78-bdab-1435004f049f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ee083e554351460bb418a3d98b537343', 'neutron:revision_number': '2', 'neutron:security_group_ids': '83a2be6b-fd76-4549-82b1-9fd8e8284c8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c85056b-f6a9-4ab1-bc66-86e8e15bd6fd, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=cb56f4bc-ae6e-4f97-afb4-1d300f11c761) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.311 155011 INFO neutron.agent.ovn.metadata.agent [-] Port cb56f4bc-ae6e-4f97-afb4-1d300f11c761 in datapath ed0c9eb2-5a02-4561-b216-a1cb6ff3164f bound to our chassis#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.312 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed0c9eb2-5a02-4561-b216-a1cb6ff3164f#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.322 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[365de3cd-b594-4cf8-b1ea-7c527b7ccf51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.323 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped0c9eb2-51 in ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:03:18 np0005604943 systemd-machined[206973]: New machine qemu-14-instance-0000000e.
Feb  2 07:03:18 np0005604943 systemd-udevd[259125]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.326 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped0c9eb2-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.326 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2e576fe7-bf4c-451f-ab42-e7d7f766460b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.327 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[39418674-6073-41f1-a045-d9c92ad1f057]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.335 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[c7afe806-b36a-4be9-af66-761c6eb8dce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 NetworkManager[49093]: <info>  [1770033798.3391] device (tapcb56f4bc-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:03:18 np0005604943 NetworkManager[49093]: <info>  [1770033798.3398] device (tapcb56f4bc-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:03:18 np0005604943 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Feb  2 07:03:18 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:18Z|00148|binding|INFO|Setting lport cb56f4bc-ae6e-4f97-afb4-1d300f11c761 ovn-installed in OVS
Feb  2 07:03:18 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:18Z|00149|binding|INFO|Setting lport cb56f4bc-ae6e-4f97-afb4-1d300f11c761 up in Southbound
Feb  2 07:03:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 99 KiB/s rd, 2.6 MiB/s wr, 133 op/s
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.343 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.347 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[57634f26-b1a2-4451-9829-857147d69e6c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.368 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0ccc11-c2b1-4d86-aa57-ff9befb81ae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 NetworkManager[49093]: <info>  [1770033798.3736] manager: (taped0c9eb2-50): new Veth device (/org/freedesktop/NetworkManager/Devices/78)
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.372 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[275e1fe2-00fe-441b-8ddf-8a71eb1455ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.394 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4341ba-8ab5-4138-b737-eb786f040207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.397 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[23ce511d-5d35-44ed-a80f-5a28279f0955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 NetworkManager[49093]: <info>  [1770033798.4118] device (taped0c9eb2-50): carrier: link connected
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.414 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[24c04980-b19a-4227-ad3b-c28f90c78086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.425 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d92405fd-de44-4f29-8af7-f70f9dd7f51c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped0c9eb2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:ce:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424376, 'reachable_time': 16550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259157, 'error': None, 'target': 'ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.433 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a732e8e8-498e-4e17-80dd-ec9a8b7ea515]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:ce88'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 424376, 'tstamp': 424376}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259158, 'error': None, 'target': 'ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.442 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[89576851-eb84-4990-b88b-d466cc138e4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped0c9eb2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:ce:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424376, 'reachable_time': 16550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259159, 'error': None, 'target': 'ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.461 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc925dd-0dcb-4a7c-aa80-1daed812e2bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.498 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[aa428107-8ea1-4de7-9c83-ddc86ab57bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.500 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped0c9eb2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.500 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.500 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped0c9eb2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.502 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:18 np0005604943 NetworkManager[49093]: <info>  [1770033798.5033] manager: (taped0c9eb2-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Feb  2 07:03:18 np0005604943 kernel: taped0c9eb2-50: entered promiscuous mode
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.512 238887 DEBUG nova.compute.manager [req-3a863692-16bc-4ec3-ae73-f1de10117bd2 req-8bbbd557-4679-43f9-94d5-b960f3cc526b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Received event network-vif-plugged-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.513 238887 DEBUG oslo_concurrency.lockutils [req-3a863692-16bc-4ec3-ae73-f1de10117bd2 req-8bbbd557-4679-43f9-94d5-b960f3cc526b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.513 238887 DEBUG oslo_concurrency.lockutils [req-3a863692-16bc-4ec3-ae73-f1de10117bd2 req-8bbbd557-4679-43f9-94d5-b960f3cc526b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.514 238887 DEBUG oslo_concurrency.lockutils [req-3a863692-16bc-4ec3-ae73-f1de10117bd2 req-8bbbd557-4679-43f9-94d5-b960f3cc526b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.514 238887 DEBUG nova.compute.manager [req-3a863692-16bc-4ec3-ae73-f1de10117bd2 req-8bbbd557-4679-43f9-94d5-b960f3cc526b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Processing event network-vif-plugged-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.516 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped0c9eb2-50, col_values=(('external_ids', {'iface-id': '892186f0-5766-4410-8f8e-b11282594eae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.518 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:18 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:18Z|00150|binding|INFO|Releasing lport 892186f0-5766-4410-8f8e-b11282594eae from this chassis (sb_readonly=0)
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.521 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed0c9eb2-5a02-4561-b216-a1cb6ff3164f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed0c9eb2-5a02-4561-b216-a1cb6ff3164f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.522 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[63f6fac9-763e-4c34-89f0-0b12d6069f40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.523 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/ed0c9eb2-5a02-4561-b216-a1cb6ff3164f.pid.haproxy
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID ed0c9eb2-5a02-4561-b216-a1cb6ff3164f
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 07:03:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:18.523 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f', 'env', 'PROCESS_TAG=haproxy-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed0c9eb2-5a02-4561-b216-a1cb6ff3164f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.525 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.844 238887 DEBUG nova.compute.manager [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.845 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033798.8438218, a96ffff3-5920-4e78-bdab-1435004f049f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.846 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] VM Started (Lifecycle Event)#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.850 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.854 238887 INFO nova.virt.libvirt.driver [-] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Instance spawned successfully.#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.854 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.866 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.869 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.879 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.880 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.881 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.881 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.882 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.883 238887 DEBUG nova.virt.libvirt.driver [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.888 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.889 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033798.8448708, a96ffff3-5920-4e78-bdab-1435004f049f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.889 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] VM Paused (Lifecycle Event)
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.923 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.926 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033798.8481894, a96ffff3-5920-4e78-bdab-1435004f049f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.927 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] VM Resumed (Lifecycle Event)
Feb  2 07:03:18 np0005604943 podman[259251]: 2026-02-02 12:03:18.931181612 +0000 UTC m=+0.081167422 container create 9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.957 238887 INFO nova.compute.manager [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Took 4.15 seconds to spawn the instance on the hypervisor.
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.958 238887 DEBUG nova.compute.manager [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.962 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:03:18 np0005604943 podman[259251]: 2026-02-02 12:03:18.873215745 +0000 UTC m=+0.023201575 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:03:18 np0005604943 nova_compute[238883]: 2026-02-02 12:03:18.972 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 07:03:18 np0005604943 systemd[1]: Started libpod-conmon-9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065.scope.
Feb  2 07:03:18 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:03:19 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62d290185e6c1e3f45e6bc5c345e8324e8b34f45fcad4b446258c92607cb88a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:19 np0005604943 nova_compute[238883]: 2026-02-02 12:03:19.007 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 07:03:19 np0005604943 podman[259251]: 2026-02-02 12:03:19.014688934 +0000 UTC m=+0.164674754 container init 9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb  2 07:03:19 np0005604943 podman[259251]: 2026-02-02 12:03:19.02282917 +0000 UTC m=+0.172814980 container start 9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb  2 07:03:19 np0005604943 nova_compute[238883]: 2026-02-02 12:03:19.027 238887 INFO nova.compute.manager [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Took 6.47 seconds to build instance.
Feb  2 07:03:19 np0005604943 nova_compute[238883]: 2026-02-02 12:03:19.044 238887 DEBUG oslo_concurrency.lockutils [None req-8ce719ed-9df0-4564-b19e-2d7e3787afca 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:03:19 np0005604943 neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f[259265]: [NOTICE]   (259270) : New worker (259272) forked
Feb  2 07:03:19 np0005604943 neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f[259265]: [NOTICE]   (259270) : Loading success.
Feb  2 07:03:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:03:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:03:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:03:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:03:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 139 KiB/s rd, 3.5 MiB/s wr, 194 op/s
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:03:20 np0005604943 nova_compute[238883]: 2026-02-02 12:03:20.650 238887 DEBUG nova.compute.manager [req-9a1f45f8-0b1b-4ff8-a034-ae9465ab69cc req-85d2e314-2908-48bc-a496-b03028625729 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Received event network-vif-plugged-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 07:03:20 np0005604943 nova_compute[238883]: 2026-02-02 12:03:20.651 238887 DEBUG oslo_concurrency.lockutils [req-9a1f45f8-0b1b-4ff8-a034-ae9465ab69cc req-85d2e314-2908-48bc-a496-b03028625729 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:03:20 np0005604943 nova_compute[238883]: 2026-02-02 12:03:20.652 238887 DEBUG oslo_concurrency.lockutils [req-9a1f45f8-0b1b-4ff8-a034-ae9465ab69cc req-85d2e314-2908-48bc-a496-b03028625729 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:03:20 np0005604943 nova_compute[238883]: 2026-02-02 12:03:20.652 238887 DEBUG oslo_concurrency.lockutils [req-9a1f45f8-0b1b-4ff8-a034-ae9465ab69cc req-85d2e314-2908-48bc-a496-b03028625729 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:03:20 np0005604943 nova_compute[238883]: 2026-02-02 12:03:20.653 238887 DEBUG nova.compute.manager [req-9a1f45f8-0b1b-4ff8-a034-ae9465ab69cc req-85d2e314-2908-48bc-a496-b03028625729 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] No waiting events found dispatching network-vif-plugged-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 07:03:20 np0005604943 nova_compute[238883]: 2026-02-02 12:03:20.653 238887 WARNING nova.compute.manager [req-9a1f45f8-0b1b-4ff8-a034-ae9465ab69cc req-85d2e314-2908-48bc-a496-b03028625729 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Received unexpected event network-vif-plugged-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 for instance with vm_state active and task_state None.
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:03:20 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:03:20 np0005604943 podman[259495]: 2026-02-02 12:03:20.975616168 +0000 UTC m=+0.044191492 container create bc74f2583463fae5a6509cc165585f5d7e0fed715d26ed3eefd2ac7b39c24ca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mahavira, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:03:21 np0005604943 systemd[1]: Started libpod-conmon-bc74f2583463fae5a6509cc165585f5d7e0fed715d26ed3eefd2ac7b39c24ca4.scope.
Feb  2 07:03:21 np0005604943 podman[259495]: 2026-02-02 12:03:20.953730408 +0000 UTC m=+0.022305722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:03:21 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:03:21 np0005604943 podman[259495]: 2026-02-02 12:03:21.095243108 +0000 UTC m=+0.163818472 container init bc74f2583463fae5a6509cc165585f5d7e0fed715d26ed3eefd2ac7b39c24ca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb  2 07:03:21 np0005604943 podman[259495]: 2026-02-02 12:03:21.103257131 +0000 UTC m=+0.171832495 container start bc74f2583463fae5a6509cc165585f5d7e0fed715d26ed3eefd2ac7b39c24ca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mahavira, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:03:21 np0005604943 podman[259495]: 2026-02-02 12:03:21.10775584 +0000 UTC m=+0.176331204 container attach bc74f2583463fae5a6509cc165585f5d7e0fed715d26ed3eefd2ac7b39c24ca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mahavira, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 07:03:21 np0005604943 friendly_mahavira[259511]: 167 167
Feb  2 07:03:21 np0005604943 systemd[1]: libpod-bc74f2583463fae5a6509cc165585f5d7e0fed715d26ed3eefd2ac7b39c24ca4.scope: Deactivated successfully.
Feb  2 07:03:21 np0005604943 podman[259495]: 2026-02-02 12:03:21.123053615 +0000 UTC m=+0.191628949 container died bc74f2583463fae5a6509cc165585f5d7e0fed715d26ed3eefd2ac7b39c24ca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:03:21 np0005604943 systemd[1]: var-lib-containers-storage-overlay-187de48b8a845fbb428570e883c9eecd468e303fbe91bb69e7fda9dac3b4e1fd-merged.mount: Deactivated successfully.
Feb  2 07:03:21 np0005604943 podman[259495]: 2026-02-02 12:03:21.174138229 +0000 UTC m=+0.242713553 container remove bc74f2583463fae5a6509cc165585f5d7e0fed715d26ed3eefd2ac7b39c24ca4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:03:21 np0005604943 systemd[1]: libpod-conmon-bc74f2583463fae5a6509cc165585f5d7e0fed715d26ed3eefd2ac7b39c24ca4.scope: Deactivated successfully.
Feb  2 07:03:21 np0005604943 podman[259534]: 2026-02-02 12:03:21.327512543 +0000 UTC m=+0.043910824 container create 7031c01c666dc2b546bf381ed62d2f33019aefd5d1876b401c8885c921169ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:03:21 np0005604943 systemd[1]: Started libpod-conmon-7031c01c666dc2b546bf381ed62d2f33019aefd5d1876b401c8885c921169ce7.scope.
Feb  2 07:03:21 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:03:21 np0005604943 podman[259534]: 2026-02-02 12:03:21.308982512 +0000 UTC m=+0.025380813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:03:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361a9dc9c8821533ae4df227c2e288468fdfbe0a6d143a6640456ca94c132f82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361a9dc9c8821533ae4df227c2e288468fdfbe0a6d143a6640456ca94c132f82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361a9dc9c8821533ae4df227c2e288468fdfbe0a6d143a6640456ca94c132f82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:21 np0005604943 nova_compute[238883]: 2026-02-02 12:03:21.427 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:03:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361a9dc9c8821533ae4df227c2e288468fdfbe0a6d143a6640456ca94c132f82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:21 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361a9dc9c8821533ae4df227c2e288468fdfbe0a6d143a6640456ca94c132f82/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:21 np0005604943 podman[259534]: 2026-02-02 12:03:21.449175447 +0000 UTC m=+0.165573728 container init 7031c01c666dc2b546bf381ed62d2f33019aefd5d1876b401c8885c921169ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 07:03:21 np0005604943 podman[259534]: 2026-02-02 12:03:21.457384985 +0000 UTC m=+0.173783296 container start 7031c01c666dc2b546bf381ed62d2f33019aefd5d1876b401c8885c921169ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_kare, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:03:21 np0005604943 podman[259534]: 2026-02-02 12:03:21.461826143 +0000 UTC m=+0.178224454 container attach 7031c01c666dc2b546bf381ed62d2f33019aefd5d1876b401c8885c921169ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_kare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003481012167871297 of space, bias 1.0, pg target 0.10443036503613891 quantized to 32 (current 32)
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.034043880849407154 of space, bias 1.0, pg target 10.213164254822146 quantized to 32 (current 32)
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0003483208810754483 of space, bias 1.0, pg target 0.10101305551188 quantized to 32 (current 32)
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663636583101085 of space, bias 1.0, pg target 0.19324546090993144 quantized to 32 (current 32)
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.030259512128592e-06 of space, bias 4.0, pg target 0.0011951010340691668 quantized to 16 (current 16)
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011064783160773588 quantized to 32 (current 32)
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012171261476850949 quantized to 32 (current 32)
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:03:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Feb  2 07:03:21 np0005604943 heuristic_kare[259551]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:03:21 np0005604943 heuristic_kare[259551]: --> All data devices are unavailable
Feb  2 07:03:21 np0005604943 systemd[1]: libpod-7031c01c666dc2b546bf381ed62d2f33019aefd5d1876b401c8885c921169ce7.scope: Deactivated successfully.
Feb  2 07:03:21 np0005604943 podman[259534]: 2026-02-02 12:03:21.933466141 +0000 UTC m=+0.649864442 container died 7031c01c666dc2b546bf381ed62d2f33019aefd5d1876b401c8885c921169ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb  2 07:03:21 np0005604943 systemd[1]: var-lib-containers-storage-overlay-361a9dc9c8821533ae4df227c2e288468fdfbe0a6d143a6640456ca94c132f82-merged.mount: Deactivated successfully.
Feb  2 07:03:22 np0005604943 podman[259534]: 2026-02-02 12:03:22.038476144 +0000 UTC m=+0.754874425 container remove 7031c01c666dc2b546bf381ed62d2f33019aefd5d1876b401c8885c921169ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_kare, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 07:03:22 np0005604943 systemd[1]: libpod-conmon-7031c01c666dc2b546bf381ed62d2f33019aefd5d1876b401c8885c921169ce7.scope: Deactivated successfully.
Feb  2 07:03:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 239 op/s
Feb  2 07:03:22 np0005604943 podman[259646]: 2026-02-02 12:03:22.448015817 +0000 UTC m=+0.036967981 container create 3bc7062aeca253c66640c230224bc917266e250587e9a457cd7bcc4f5fa41937 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_robinson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:03:22 np0005604943 systemd[1]: Started libpod-conmon-3bc7062aeca253c66640c230224bc917266e250587e9a457cd7bcc4f5fa41937.scope.
Feb  2 07:03:22 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:03:22 np0005604943 podman[259646]: 2026-02-02 12:03:22.431751545 +0000 UTC m=+0.020703739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:03:22 np0005604943 podman[259646]: 2026-02-02 12:03:22.529985678 +0000 UTC m=+0.118937872 container init 3bc7062aeca253c66640c230224bc917266e250587e9a457cd7bcc4f5fa41937 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:03:22 np0005604943 podman[259646]: 2026-02-02 12:03:22.536968503 +0000 UTC m=+0.125920667 container start 3bc7062aeca253c66640c230224bc917266e250587e9a457cd7bcc4f5fa41937 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:03:22 np0005604943 podman[259646]: 2026-02-02 12:03:22.540460426 +0000 UTC m=+0.129412600 container attach 3bc7062aeca253c66640c230224bc917266e250587e9a457cd7bcc4f5fa41937 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_robinson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 07:03:22 np0005604943 practical_robinson[259663]: 167 167
Feb  2 07:03:22 np0005604943 systemd[1]: libpod-3bc7062aeca253c66640c230224bc917266e250587e9a457cd7bcc4f5fa41937.scope: Deactivated successfully.
Feb  2 07:03:22 np0005604943 podman[259646]: 2026-02-02 12:03:22.543702792 +0000 UTC m=+0.132654986 container died 3bc7062aeca253c66640c230224bc917266e250587e9a457cd7bcc4f5fa41937 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_robinson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:03:22 np0005604943 nova_compute[238883]: 2026-02-02 12:03:22.564 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:22 np0005604943 systemd[1]: var-lib-containers-storage-overlay-5759a768faa2f3fca95224fa91a03ee6c7b6cff9708fca2e738dc48256a3e9a0-merged.mount: Deactivated successfully.
Feb  2 07:03:22 np0005604943 podman[259646]: 2026-02-02 12:03:22.589607349 +0000 UTC m=+0.178559513 container remove 3bc7062aeca253c66640c230224bc917266e250587e9a457cd7bcc4f5fa41937 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:03:22 np0005604943 systemd[1]: libpod-conmon-3bc7062aeca253c66640c230224bc917266e250587e9a457cd7bcc4f5fa41937.scope: Deactivated successfully.
Feb  2 07:03:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:03:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Feb  2 07:03:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Feb  2 07:03:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Feb  2 07:03:22 np0005604943 podman[259685]: 2026-02-02 12:03:22.729128815 +0000 UTC m=+0.041181892 container create f69a8c2b007f9d3547a15e4386864ac72124445c8b13f0255f7e3618085a5198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 07:03:22 np0005604943 systemd[1]: Started libpod-conmon-f69a8c2b007f9d3547a15e4386864ac72124445c8b13f0255f7e3618085a5198.scope.
Feb  2 07:03:22 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:03:22 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505a197efaf189d01077dcb1dceaa520680318070c172845a4b0e02069293c86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:22 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505a197efaf189d01077dcb1dceaa520680318070c172845a4b0e02069293c86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:22 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505a197efaf189d01077dcb1dceaa520680318070c172845a4b0e02069293c86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:22 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505a197efaf189d01077dcb1dceaa520680318070c172845a4b0e02069293c86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:22 np0005604943 podman[259685]: 2026-02-02 12:03:22.710232185 +0000 UTC m=+0.022285292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:03:22 np0005604943 podman[259685]: 2026-02-02 12:03:22.817167498 +0000 UTC m=+0.129220595 container init f69a8c2b007f9d3547a15e4386864ac72124445c8b13f0255f7e3618085a5198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chaplygin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 07:03:22 np0005604943 podman[259685]: 2026-02-02 12:03:22.822946491 +0000 UTC m=+0.134999568 container start f69a8c2b007f9d3547a15e4386864ac72124445c8b13f0255f7e3618085a5198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:03:22 np0005604943 podman[259685]: 2026-02-02 12:03:22.826555638 +0000 UTC m=+0.138608775 container attach f69a8c2b007f9d3547a15e4386864ac72124445c8b13f0255f7e3618085a5198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chaplygin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 07:03:22 np0005604943 NetworkManager[49093]: <info>  [1770033802.9736] manager: (patch-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Feb  2 07:03:22 np0005604943 nova_compute[238883]: 2026-02-02 12:03:22.972 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:22 np0005604943 NetworkManager[49093]: <info>  [1770033802.9747] manager: (patch-br-int-to-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Feb  2 07:03:23 np0005604943 nova_compute[238883]: 2026-02-02 12:03:23.033 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:23Z|00151|binding|INFO|Releasing lport 892186f0-5766-4410-8f8e-b11282594eae from this chassis (sb_readonly=0)
Feb  2 07:03:23 np0005604943 nova_compute[238883]: 2026-02-02 12:03:23.045 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]: {
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:    "0": [
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:        {
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "devices": [
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "/dev/loop3"
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            ],
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_name": "ceph_lv0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_size": "21470642176",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "name": "ceph_lv0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "tags": {
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.cluster_name": "ceph",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.crush_device_class": "",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.encrypted": "0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.objectstore": "bluestore",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.osd_id": "0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.type": "block",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.vdo": "0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.with_tpm": "0"
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            },
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "type": "block",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "vg_name": "ceph_vg0"
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:        }
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:    ],
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:    "1": [
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:        {
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "devices": [
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "/dev/loop4"
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            ],
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_name": "ceph_lv1",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_size": "21470642176",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "name": "ceph_lv1",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "tags": {
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.cluster_name": "ceph",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.crush_device_class": "",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.encrypted": "0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.objectstore": "bluestore",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.osd_id": "1",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.type": "block",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.vdo": "0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.with_tpm": "0"
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            },
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "type": "block",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "vg_name": "ceph_vg1"
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:        }
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:    ],
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:    "2": [
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:        {
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "devices": [
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "/dev/loop5"
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            ],
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_name": "ceph_lv2",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_size": "21470642176",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "name": "ceph_lv2",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "tags": {
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.cluster_name": "ceph",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.crush_device_class": "",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.encrypted": "0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.objectstore": "bluestore",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.osd_id": "2",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.type": "block",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.vdo": "0",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:                "ceph.with_tpm": "0"
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            },
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "type": "block",
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:            "vg_name": "ceph_vg2"
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:        }
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]:    ]
Feb  2 07:03:23 np0005604943 trusting_chaplygin[259702]: }
Feb  2 07:03:23 np0005604943 systemd[1]: libpod-f69a8c2b007f9d3547a15e4386864ac72124445c8b13f0255f7e3618085a5198.scope: Deactivated successfully.
Feb  2 07:03:23 np0005604943 podman[259685]: 2026-02-02 12:03:23.109957487 +0000 UTC m=+0.422010574 container died f69a8c2b007f9d3547a15e4386864ac72124445c8b13f0255f7e3618085a5198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chaplygin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:03:23 np0005604943 systemd[1]: var-lib-containers-storage-overlay-505a197efaf189d01077dcb1dceaa520680318070c172845a4b0e02069293c86-merged.mount: Deactivated successfully.
Feb  2 07:03:23 np0005604943 podman[259685]: 2026-02-02 12:03:23.152968107 +0000 UTC m=+0.465021184 container remove f69a8c2b007f9d3547a15e4386864ac72124445c8b13f0255f7e3618085a5198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 07:03:23 np0005604943 systemd[1]: libpod-conmon-f69a8c2b007f9d3547a15e4386864ac72124445c8b13f0255f7e3618085a5198.scope: Deactivated successfully.
Feb  2 07:03:23 np0005604943 podman[259786]: 2026-02-02 12:03:23.543908937 +0000 UTC m=+0.040893275 container create d1fdbf6ddcbd87e4bf24a9867269b6adf19e0a03b4f2691326fa6175a132fa16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:03:23 np0005604943 systemd[1]: Started libpod-conmon-d1fdbf6ddcbd87e4bf24a9867269b6adf19e0a03b4f2691326fa6175a132fa16.scope.
Feb  2 07:03:23 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:03:23 np0005604943 podman[259786]: 2026-02-02 12:03:23.598197486 +0000 UTC m=+0.095181844 container init d1fdbf6ddcbd87e4bf24a9867269b6adf19e0a03b4f2691326fa6175a132fa16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 07:03:23 np0005604943 podman[259786]: 2026-02-02 12:03:23.603531297 +0000 UTC m=+0.100515635 container start d1fdbf6ddcbd87e4bf24a9867269b6adf19e0a03b4f2691326fa6175a132fa16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 07:03:23 np0005604943 podman[259786]: 2026-02-02 12:03:23.606924887 +0000 UTC m=+0.103909225 container attach d1fdbf6ddcbd87e4bf24a9867269b6adf19e0a03b4f2691326fa6175a132fa16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb  2 07:03:23 np0005604943 agitated_benz[259802]: 167 167
Feb  2 07:03:23 np0005604943 systemd[1]: libpod-d1fdbf6ddcbd87e4bf24a9867269b6adf19e0a03b4f2691326fa6175a132fa16.scope: Deactivated successfully.
Feb  2 07:03:23 np0005604943 podman[259786]: 2026-02-02 12:03:23.608139869 +0000 UTC m=+0.105124207 container died d1fdbf6ddcbd87e4bf24a9867269b6adf19e0a03b4f2691326fa6175a132fa16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 07:03:23 np0005604943 podman[259786]: 2026-02-02 12:03:23.527421241 +0000 UTC m=+0.024405609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:03:23 np0005604943 systemd[1]: var-lib-containers-storage-overlay-d1f3e1088c44adeeb20d7a5f3548692d8919d1b7bacd0caa2e4e536c5ce71e84-merged.mount: Deactivated successfully.
Feb  2 07:03:23 np0005604943 podman[259786]: 2026-02-02 12:03:23.644592345 +0000 UTC m=+0.141576703 container remove d1fdbf6ddcbd87e4bf24a9867269b6adf19e0a03b4f2691326fa6175a132fa16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 07:03:23 np0005604943 nova_compute[238883]: 2026-02-02 12:03:23.645 238887 DEBUG nova.compute.manager [req-e4fbbf7a-8f92-4654-afbd-efa001113e3d req-e8795648-3471-4057-a1ad-522bc65cb3a3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Received event network-changed-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:03:23 np0005604943 nova_compute[238883]: 2026-02-02 12:03:23.646 238887 DEBUG nova.compute.manager [req-e4fbbf7a-8f92-4654-afbd-efa001113e3d req-e8795648-3471-4057-a1ad-522bc65cb3a3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Refreshing instance network info cache due to event network-changed-cb56f4bc-ae6e-4f97-afb4-1d300f11c761. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:03:23 np0005604943 nova_compute[238883]: 2026-02-02 12:03:23.646 238887 DEBUG oslo_concurrency.lockutils [req-e4fbbf7a-8f92-4654-afbd-efa001113e3d req-e8795648-3471-4057-a1ad-522bc65cb3a3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-a96ffff3-5920-4e78-bdab-1435004f049f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:03:23 np0005604943 nova_compute[238883]: 2026-02-02 12:03:23.646 238887 DEBUG oslo_concurrency.lockutils [req-e4fbbf7a-8f92-4654-afbd-efa001113e3d req-e8795648-3471-4057-a1ad-522bc65cb3a3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-a96ffff3-5920-4e78-bdab-1435004f049f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:03:23 np0005604943 nova_compute[238883]: 2026-02-02 12:03:23.647 238887 DEBUG nova.network.neutron [req-e4fbbf7a-8f92-4654-afbd-efa001113e3d req-e8795648-3471-4057-a1ad-522bc65cb3a3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Refreshing network info cache for port cb56f4bc-ae6e-4f97-afb4-1d300f11c761 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:03:23 np0005604943 systemd[1]: libpod-conmon-d1fdbf6ddcbd87e4bf24a9867269b6adf19e0a03b4f2691326fa6175a132fa16.scope: Deactivated successfully.
Feb  2 07:03:23 np0005604943 podman[259827]: 2026-02-02 12:03:23.817553589 +0000 UTC m=+0.051340052 container create 25f88245cd65547e1d9157ba28ddcebc047158c8ac415665e8be3f9f3a76a6ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_elbakyan, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:03:23 np0005604943 systemd[1]: Started libpod-conmon-25f88245cd65547e1d9157ba28ddcebc047158c8ac415665e8be3f9f3a76a6ca.scope.
Feb  2 07:03:23 np0005604943 podman[259827]: 2026-02-02 12:03:23.792063293 +0000 UTC m=+0.025849836 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:03:23 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:03:23 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1450083d72502699cdae9dc7ed7000bdc2c1d3db78b700566d59acffac1522/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:23 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1450083d72502699cdae9dc7ed7000bdc2c1d3db78b700566d59acffac1522/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:23 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1450083d72502699cdae9dc7ed7000bdc2c1d3db78b700566d59acffac1522/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:23 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1450083d72502699cdae9dc7ed7000bdc2c1d3db78b700566d59acffac1522/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:23 np0005604943 podman[259827]: 2026-02-02 12:03:23.935283108 +0000 UTC m=+0.169069601 container init 25f88245cd65547e1d9157ba28ddcebc047158c8ac415665e8be3f9f3a76a6ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_elbakyan, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:03:23 np0005604943 podman[259827]: 2026-02-02 12:03:23.944078251 +0000 UTC m=+0.177864734 container start 25f88245cd65547e1d9157ba28ddcebc047158c8ac415665e8be3f9f3a76a6ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_elbakyan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 07:03:23 np0005604943 podman[259827]: 2026-02-02 12:03:23.958033521 +0000 UTC m=+0.191820004 container attach 25f88245cd65547e1d9157ba28ddcebc047158c8ac415665e8be3f9f3a76a6ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_elbakyan, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:03:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 263 op/s
Feb  2 07:03:24 np0005604943 lvm[259923]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:03:24 np0005604943 lvm[259923]: VG ceph_vg1 finished
Feb  2 07:03:24 np0005604943 lvm[259922]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:03:24 np0005604943 lvm[259922]: VG ceph_vg0 finished
Feb  2 07:03:24 np0005604943 lvm[259925]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:03:24 np0005604943 lvm[259925]: VG ceph_vg2 finished
Feb  2 07:03:24 np0005604943 stoic_elbakyan[259844]: {}
Feb  2 07:03:24 np0005604943 systemd[1]: libpod-25f88245cd65547e1d9157ba28ddcebc047158c8ac415665e8be3f9f3a76a6ca.scope: Deactivated successfully.
Feb  2 07:03:24 np0005604943 systemd[1]: libpod-25f88245cd65547e1d9157ba28ddcebc047158c8ac415665e8be3f9f3a76a6ca.scope: Consumed 1.237s CPU time.
Feb  2 07:03:24 np0005604943 conmon[259844]: conmon 25f88245cd65547e1d91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25f88245cd65547e1d9157ba28ddcebc047158c8ac415665e8be3f9f3a76a6ca.scope/container/memory.events
Feb  2 07:03:24 np0005604943 podman[259827]: 2026-02-02 12:03:24.775523584 +0000 UTC m=+1.009310047 container died 25f88245cd65547e1d9157ba28ddcebc047158c8ac415665e8be3f9f3a76a6ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_elbakyan, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 07:03:24 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1c1450083d72502699cdae9dc7ed7000bdc2c1d3db78b700566d59acffac1522-merged.mount: Deactivated successfully.
Feb  2 07:03:24 np0005604943 podman[259827]: 2026-02-02 12:03:24.812145324 +0000 UTC m=+1.045931787 container remove 25f88245cd65547e1d9157ba28ddcebc047158c8ac415665e8be3f9f3a76a6ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_elbakyan, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:03:24 np0005604943 systemd[1]: libpod-conmon-25f88245cd65547e1d9157ba28ddcebc047158c8ac415665e8be3f9f3a76a6ca.scope: Deactivated successfully.
Feb  2 07:03:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:03:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:03:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:03:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:03:25 np0005604943 nova_compute[238883]: 2026-02-02 12:03:25.701 238887 DEBUG nova.network.neutron [req-e4fbbf7a-8f92-4654-afbd-efa001113e3d req-e8795648-3471-4057-a1ad-522bc65cb3a3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Updated VIF entry in instance network info cache for port cb56f4bc-ae6e-4f97-afb4-1d300f11c761. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:03:25 np0005604943 nova_compute[238883]: 2026-02-02 12:03:25.703 238887 DEBUG nova.network.neutron [req-e4fbbf7a-8f92-4654-afbd-efa001113e3d req-e8795648-3471-4057-a1ad-522bc65cb3a3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Updating instance_info_cache with network_info: [{"id": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "address": "fa:16:3e:db:79:b0", "network": {"id": "ed0c9eb2-5a02-4561-b216-a1cb6ff3164f", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574964546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee083e554351460bb418a3d98b537343", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb56f4bc-ae", "ovs_interfaceid": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:03:25 np0005604943 nova_compute[238883]: 2026-02-02 12:03:25.722 238887 DEBUG oslo_concurrency.lockutils [req-e4fbbf7a-8f92-4654-afbd-efa001113e3d req-e8795648-3471-4057-a1ad-522bc65cb3a3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-a96ffff3-5920-4e78-bdab-1435004f049f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.890823) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033805890850, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1149, "num_deletes": 257, "total_data_size": 1417434, "memory_usage": 1446168, "flush_reason": "Manual Compaction"}
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033805897528, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1387395, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25624, "largest_seqno": 26772, "table_properties": {"data_size": 1381531, "index_size": 3195, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13267, "raw_average_key_size": 21, "raw_value_size": 1369596, "raw_average_value_size": 2173, "num_data_blocks": 140, "num_entries": 630, "num_filter_entries": 630, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033752, "oldest_key_time": 1770033752, "file_creation_time": 1770033805, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 6733 microseconds, and 2508 cpu microseconds.
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.897554) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1387395 bytes OK
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.897567) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.899129) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.899140) EVENT_LOG_v1 {"time_micros": 1770033805899137, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.899159) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1411870, prev total WAL file size 1411870, number of live WAL files 2.
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.899605) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1354KB)], [56(10MB)]
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033805899635, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12729184, "oldest_snapshot_seqno": -1}
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5655 keys, 11006299 bytes, temperature: kUnknown
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033805940809, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 11006299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10960802, "index_size": 30283, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14149, "raw_key_size": 140929, "raw_average_key_size": 24, "raw_value_size": 10851366, "raw_average_value_size": 1918, "num_data_blocks": 1241, "num_entries": 5655, "num_filter_entries": 5655, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770033805, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.941074) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 11006299 bytes
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.942498) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 308.5 rd, 266.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.8 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(17.1) write-amplify(7.9) OK, records in: 6183, records dropped: 528 output_compression: NoCompression
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.942514) EVENT_LOG_v1 {"time_micros": 1770033805942507, "job": 30, "event": "compaction_finished", "compaction_time_micros": 41265, "compaction_time_cpu_micros": 19679, "output_level": 6, "num_output_files": 1, "total_output_size": 11006299, "num_input_records": 6183, "num_output_records": 5655, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033805942748, "job": 30, "event": "table_file_deletion", "file_number": 58}
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033805943644, "job": 30, "event": "table_file_deletion", "file_number": 56}
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.899534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.943681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.943685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.943687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.943688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:03:25 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:03:25.943690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:03:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.7 MiB/s rd, 793 KiB/s wr, 163 op/s
Feb  2 07:03:26 np0005604943 nova_compute[238883]: 2026-02-02 12:03:26.429 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:26 np0005604943 nova_compute[238883]: 2026-02-02 12:03:26.933 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "44d03b0b-b589-4231-845d-ffa7acb3bd18" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:26 np0005604943 nova_compute[238883]: 2026-02-02 12:03:26.933 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:26 np0005604943 nova_compute[238883]: 2026-02-02 12:03:26.954 238887 DEBUG nova.compute.manager [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.030 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.031 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.039 238887 DEBUG nova.virt.hardware [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.040 238887 INFO nova.compute.claims [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.160 238887 DEBUG oslo_concurrency.processutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.567 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:03:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Feb  2 07:03:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Feb  2 07:03:27 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Feb  2 07:03:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:03:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224464920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.705 238887 DEBUG oslo_concurrency.processutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.712 238887 DEBUG nova.compute.provider_tree [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.733 238887 DEBUG nova.scheduler.client.report [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.768 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.769 238887 DEBUG nova.compute.manager [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.821 238887 DEBUG nova.compute.manager [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.821 238887 DEBUG nova.network.neutron [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.843 238887 INFO nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.868 238887 DEBUG nova.compute.manager [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:03:27 np0005604943 nova_compute[238883]: 2026-02-02 12:03:27.911 238887 INFO nova.virt.block_device [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Booting with volume 3a6aa117-e3db-4571-9e89-470620d6938e at /dev/vda#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.067 238887 DEBUG os_brick.utils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.069 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.076 238887 DEBUG nova.policy [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e3fc9d8415541ecaa0da4968c9fa242', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e66ed51ccbb840f083b8a86476696747', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.082 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.082 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[37fed0ad-a5bf-4210-9232-6d289477ce1d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.084 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.091 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.091 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3ed01a-cb48-4976-bf33-7fa04626155e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.093 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.100 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.100 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[436f70e4-0a63-4d83-bc6b-74ca13fe17a1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.103 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[b301d14e-b015-488c-b62d-9d46622bbc8a]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.104 238887 DEBUG oslo_concurrency.processutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.122 238887 DEBUG oslo_concurrency.processutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.126 238887 DEBUG os_brick.initiator.connectors.lightos [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.127 238887 DEBUG os_brick.initiator.connectors.lightos [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.127 238887 DEBUG os_brick.initiator.connectors.lightos [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.128 238887 DEBUG os_brick.utils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.129 238887 DEBUG nova.virt.block_device [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Updating existing volume attachment record: 1cde9684-4281-4407-99d7-d2c8cefe0a00 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:03:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 54 KiB/s wr, 117 op/s
Feb  2 07:03:28 np0005604943 nova_compute[238883]: 2026-02-02 12:03:28.553 238887 DEBUG nova.network.neutron [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Successfully created port: 17612d87-daf5-475f-893f-bf1bb6834779 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:03:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:03:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4209901673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.250 238887 DEBUG nova.compute.manager [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.252 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.253 238887 INFO nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Creating image(s)#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.253 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.253 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Ensure instance console log exists: /var/lib/nova/instances/44d03b0b-b589-4231-845d-ffa7acb3bd18/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.254 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.254 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.254 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.476 238887 DEBUG nova.network.neutron [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Successfully updated port: 17612d87-daf5-475f-893f-bf1bb6834779 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.490 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "refresh_cache-44d03b0b-b589-4231-845d-ffa7acb3bd18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.491 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquired lock "refresh_cache-44d03b0b-b589-4231-845d-ffa7acb3bd18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.491 238887 DEBUG nova.network.neutron [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.575 238887 DEBUG nova.compute.manager [req-5cdf77fb-99b2-4185-98de-c5f4ce096903 req-7d99da36-ec06-4e74-8062-ce724bb9114d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Received event network-changed-17612d87-daf5-475f-893f-bf1bb6834779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.576 238887 DEBUG nova.compute.manager [req-5cdf77fb-99b2-4185-98de-c5f4ce096903 req-7d99da36-ec06-4e74-8062-ce724bb9114d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Refreshing instance network info cache due to event network-changed-17612d87-daf5-475f-893f-bf1bb6834779. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.576 238887 DEBUG oslo_concurrency.lockutils [req-5cdf77fb-99b2-4185-98de-c5f4ce096903 req-7d99da36-ec06-4e74-8062-ce724bb9114d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-44d03b0b-b589-4231-845d-ffa7acb3bd18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.633 238887 DEBUG nova.network.neutron [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:03:29 np0005604943 nova_compute[238883]: 2026-02-02 12:03:29.654 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:03:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 730 KiB/s rd, 639 B/s wr, 33 op/s
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.520 238887 DEBUG nova.network.neutron [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Updating instance_info_cache with network_info: [{"id": "17612d87-daf5-475f-893f-bf1bb6834779", "address": "fa:16:3e:ef:aa:5e", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17612d87-da", "ovs_interfaceid": "17612d87-daf5-475f-893f-bf1bb6834779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.536 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Releasing lock "refresh_cache-44d03b0b-b589-4231-845d-ffa7acb3bd18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.536 238887 DEBUG nova.compute.manager [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Instance network_info: |[{"id": "17612d87-daf5-475f-893f-bf1bb6834779", "address": "fa:16:3e:ef:aa:5e", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17612d87-da", "ovs_interfaceid": "17612d87-daf5-475f-893f-bf1bb6834779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.536 238887 DEBUG oslo_concurrency.lockutils [req-5cdf77fb-99b2-4185-98de-c5f4ce096903 req-7d99da36-ec06-4e74-8062-ce724bb9114d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-44d03b0b-b589-4231-845d-ffa7acb3bd18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.537 238887 DEBUG nova.network.neutron [req-5cdf77fb-99b2-4185-98de-c5f4ce096903 req-7d99da36-ec06-4e74-8062-ce724bb9114d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Refreshing network info cache for port 17612d87-daf5-475f-893f-bf1bb6834779 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.540 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Start _get_guest_xml network_info=[{"id": "17612d87-daf5-475f-893f-bf1bb6834779", "address": "fa:16:3e:ef:aa:5e", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17612d87-da", "ovs_interfaceid": "17612d87-daf5-475f-893f-bf1bb6834779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': '1cde9684-4281-4407-99d7-d2c8cefe0a00', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3a6aa117-e3db-4571-9e89-470620d6938e', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3a6aa117-e3db-4571-9e89-470620d6938e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '44d03b0b-b589-4231-845d-ffa7acb3bd18', 'attached_at': '', 'detached_at': '', 'volume_id': '3a6aa117-e3db-4571-9e89-470620d6938e', 'serial': '3a6aa117-e3db-4571-9e89-470620d6938e'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.545 238887 WARNING nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.549 238887 DEBUG nova.virt.libvirt.host [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.549 238887 DEBUG nova.virt.libvirt.host [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.552 238887 DEBUG nova.virt.libvirt.host [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.552 238887 DEBUG nova.virt.libvirt.host [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.552 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.553 238887 DEBUG nova.virt.hardware [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.553 238887 DEBUG nova.virt.hardware [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.553 238887 DEBUG nova.virt.hardware [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.553 238887 DEBUG nova.virt.hardware [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.553 238887 DEBUG nova.virt.hardware [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.554 238887 DEBUG nova.virt.hardware [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.554 238887 DEBUG nova.virt.hardware [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.554 238887 DEBUG nova.virt.hardware [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.554 238887 DEBUG nova.virt.hardware [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.555 238887 DEBUG nova.virt.hardware [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.555 238887 DEBUG nova.virt.hardware [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.577 238887 DEBUG nova.storage.rbd_utils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 44d03b0b-b589-4231-845d-ffa7acb3bd18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:03:30 np0005604943 nova_compute[238883]: 2026-02-02 12:03:30.581 238887 DEBUG oslo_concurrency.processutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:31 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:31Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:db:79:b0 10.100.0.12
Feb  2 07:03:31 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:31Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:db:79:b0 10.100.0.12
Feb  2 07:03:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:03:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1480156673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.159 238887 DEBUG oslo_concurrency.processutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.357 238887 DEBUG os_brick.encryptors [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Using volume encryption metadata '{'encryption_key_id': '197b07de-b0de-4f3d-9030-683469c84efb', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3a6aa117-e3db-4571-9e89-470620d6938e', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3a6aa117-e3db-4571-9e89-470620d6938e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '44d03b0b-b589-4231-845d-ffa7acb3bd18', 'attached_at': '', 'detached_at': '', 'volume_id': '3a6aa117-e3db-4571-9e89-470620d6938e', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.360 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.375 238887 DEBUG barbicanclient.v1.secrets [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/197b07de-b0de-4f3d-9030-683469c84efb get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.376 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.400 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.401 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.423 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.423 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.430 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.447 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.448 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.469 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.469 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.487 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.488 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.511 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.512 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.539 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.540 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.558 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.559 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.582 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.583 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.620 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.622 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.650 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.651 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.690 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.690 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.716 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.716 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.739 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.740 238887 INFO barbicanclient.base [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Calculated Secrets uuid ref: secrets/197b07de-b0de-4f3d-9030-683469c84efb#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.766 238887 DEBUG barbicanclient.client [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.767 238887 DEBUG nova.virt.libvirt.host [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  <usage type="volume">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <volume>3a6aa117-e3db-4571-9e89-470620d6938e</volume>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  </usage>
Feb  2 07:03:31 np0005604943 nova_compute[238883]: </secret>
Feb  2 07:03:31 np0005604943 nova_compute[238883]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.802 238887 DEBUG nova.virt.libvirt.vif [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2142058609',display_name='tempest-TestVolumeBootPattern-server-2142058609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2142058609',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-cs06v0er',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:03:27Z,user_data=None,user_i
d='5e3fc9d8415541ecaa0da4968c9fa242',uuid=44d03b0b-b589-4231-845d-ffa7acb3bd18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17612d87-daf5-475f-893f-bf1bb6834779", "address": "fa:16:3e:ef:aa:5e", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17612d87-da", "ovs_interfaceid": "17612d87-daf5-475f-893f-bf1bb6834779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.803 238887 DEBUG nova.network.os_vif_util [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "17612d87-daf5-475f-893f-bf1bb6834779", "address": "fa:16:3e:ef:aa:5e", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17612d87-da", "ovs_interfaceid": "17612d87-daf5-475f-893f-bf1bb6834779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.803 238887 DEBUG nova.network.os_vif_util [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:aa:5e,bridge_name='br-int',has_traffic_filtering=True,id=17612d87-daf5-475f-893f-bf1bb6834779,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17612d87-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.805 238887 DEBUG nova.objects.instance [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44d03b0b-b589-4231-845d-ffa7acb3bd18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.832 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  <uuid>44d03b0b-b589-4231-845d-ffa7acb3bd18</uuid>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  <name>instance-0000000f</name>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestVolumeBootPattern-server-2142058609</nova:name>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:03:30</nova:creationTime>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        <nova:user uuid="5e3fc9d8415541ecaa0da4968c9fa242">tempest-TestVolumeBootPattern-1059348902-project-member</nova:user>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        <nova:project uuid="e66ed51ccbb840f083b8a86476696747">tempest-TestVolumeBootPattern-1059348902</nova:project>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        <nova:port uuid="17612d87-daf5-475f-893f-bf1bb6834779">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <entry name="serial">44d03b0b-b589-4231-845d-ffa7acb3bd18</entry>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <entry name="uuid">44d03b0b-b589-4231-845d-ffa7acb3bd18</entry>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/44d03b0b-b589-4231-845d-ffa7acb3bd18_disk.config">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-3a6aa117-e3db-4571-9e89-470620d6938e">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <serial>3a6aa117-e3db-4571-9e89-470620d6938e</serial>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <encryption format="luks">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:        <secret type="passphrase" uuid="eea7dd8d-c186-4b61-9e37-ae3d4c2ff14c"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      </encryption>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:ef:aa:5e"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <target dev="tap17612d87-da"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/44d03b0b-b589-4231-845d-ffa7acb3bd18/console.log" append="off"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:03:31 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:03:31 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:03:31 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:03:31 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.833 238887 DEBUG nova.compute.manager [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Preparing to wait for external event network-vif-plugged-17612d87-daf5-475f-893f-bf1bb6834779 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.833 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.833 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.833 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.834 238887 DEBUG nova.virt.libvirt.vif [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2142058609',display_name='tempest-TestVolumeBootPattern-server-2142058609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2142058609',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-cs06v0er',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:03:27Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=44d03b0b-b589-4231-845d-ffa7acb3bd18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17612d87-daf5-475f-893f-bf1bb6834779", "address": "fa:16:3e:ef:aa:5e", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17612d87-da", "ovs_interfaceid": "17612d87-daf5-475f-893f-bf1bb6834779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.834 238887 DEBUG nova.network.os_vif_util [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "17612d87-daf5-475f-893f-bf1bb6834779", "address": "fa:16:3e:ef:aa:5e", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17612d87-da", "ovs_interfaceid": "17612d87-daf5-475f-893f-bf1bb6834779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.835 238887 DEBUG nova.network.os_vif_util [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:aa:5e,bridge_name='br-int',has_traffic_filtering=True,id=17612d87-daf5-475f-893f-bf1bb6834779,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17612d87-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.835 238887 DEBUG os_vif [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:aa:5e,bridge_name='br-int',has_traffic_filtering=True,id=17612d87-daf5-475f-893f-bf1bb6834779,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17612d87-da') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.836 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.837 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.837 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.840 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.841 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17612d87-da, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.841 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap17612d87-da, col_values=(('external_ids', {'iface-id': '17612d87-daf5-475f-893f-bf1bb6834779', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:aa:5e', 'vm-uuid': '44d03b0b-b589-4231-845d-ffa7acb3bd18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:31 np0005604943 NetworkManager[49093]: <info>  [1770033811.8436] manager: (tap17612d87-da): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.845 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.848 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.849 238887 INFO os_vif [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:aa:5e,bridge_name='br-int',has_traffic_filtering=True,id=17612d87-daf5-475f-893f-bf1bb6834779,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17612d87-da')#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.896 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.897 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.897 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No VIF found with MAC fa:16:3e:ef:aa:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.898 238887 INFO nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Using config drive#033[00m
Feb  2 07:03:31 np0005604943 nova_compute[238883]: 2026-02-02 12:03:31.917 238887 DEBUG nova.storage.rbd_utils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 44d03b0b-b589-4231-845d-ffa7acb3bd18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:03:32 np0005604943 nova_compute[238883]: 2026-02-02 12:03:32.023 238887 DEBUG nova.network.neutron [req-5cdf77fb-99b2-4185-98de-c5f4ce096903 req-7d99da36-ec06-4e74-8062-ce724bb9114d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Updated VIF entry in instance network info cache for port 17612d87-daf5-475f-893f-bf1bb6834779. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:03:32 np0005604943 nova_compute[238883]: 2026-02-02 12:03:32.023 238887 DEBUG nova.network.neutron [req-5cdf77fb-99b2-4185-98de-c5f4ce096903 req-7d99da36-ec06-4e74-8062-ce724bb9114d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Updating instance_info_cache with network_info: [{"id": "17612d87-daf5-475f-893f-bf1bb6834779", "address": "fa:16:3e:ef:aa:5e", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17612d87-da", "ovs_interfaceid": "17612d87-daf5-475f-893f-bf1bb6834779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:03:32 np0005604943 nova_compute[238883]: 2026-02-02 12:03:32.045 238887 DEBUG oslo_concurrency.lockutils [req-5cdf77fb-99b2-4185-98de-c5f4ce096903 req-7d99da36-ec06-4e74-8062-ce724bb9114d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-44d03b0b-b589-4231-845d-ffa7acb3bd18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:03:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.4 MiB/s wr, 117 op/s
Feb  2 07:03:32 np0005604943 nova_compute[238883]: 2026-02-02 12:03:32.637 238887 INFO nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Creating config drive at /var/lib/nova/instances/44d03b0b-b589-4231-845d-ffa7acb3bd18/disk.config#033[00m
Feb  2 07:03:32 np0005604943 nova_compute[238883]: 2026-02-02 12:03:32.641 238887 DEBUG oslo_concurrency.processutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44d03b0b-b589-4231-845d-ffa7acb3bd18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpo8wjwcxz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:03:32 np0005604943 nova_compute[238883]: 2026-02-02 12:03:32.767 238887 DEBUG oslo_concurrency.processutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44d03b0b-b589-4231-845d-ffa7acb3bd18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpo8wjwcxz" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:32 np0005604943 nova_compute[238883]: 2026-02-02 12:03:32.793 238887 DEBUG nova.storage.rbd_utils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 44d03b0b-b589-4231-845d-ffa7acb3bd18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:03:32 np0005604943 nova_compute[238883]: 2026-02-02 12:03:32.796 238887 DEBUG oslo_concurrency.processutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44d03b0b-b589-4231-845d-ffa7acb3bd18/disk.config 44d03b0b-b589-4231-845d-ffa7acb3bd18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:32 np0005604943 nova_compute[238883]: 2026-02-02 12:03:32.907 238887 DEBUG oslo_concurrency.processutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44d03b0b-b589-4231-845d-ffa7acb3bd18/disk.config 44d03b0b-b589-4231-845d-ffa7acb3bd18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:32 np0005604943 nova_compute[238883]: 2026-02-02 12:03:32.908 238887 INFO nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Deleting local config drive /var/lib/nova/instances/44d03b0b-b589-4231-845d-ffa7acb3bd18/disk.config because it was imported into RBD.#033[00m
Feb  2 07:03:32 np0005604943 kernel: tap17612d87-da: entered promiscuous mode
Feb  2 07:03:32 np0005604943 NetworkManager[49093]: <info>  [1770033812.9514] manager: (tap17612d87-da): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Feb  2 07:03:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:32Z|00152|binding|INFO|Claiming lport 17612d87-daf5-475f-893f-bf1bb6834779 for this chassis.
Feb  2 07:03:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:32Z|00153|binding|INFO|17612d87-daf5-475f-893f-bf1bb6834779: Claiming fa:16:3e:ef:aa:5e 10.100.0.3
Feb  2 07:03:32 np0005604943 nova_compute[238883]: 2026-02-02 12:03:32.953 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:32 np0005604943 nova_compute[238883]: 2026-02-02 12:03:32.960 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:32.962 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:aa:5e 10.100.0.3'], port_security=['fa:16:3e:ef:aa:5e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '44d03b0b-b589-4231-845d-ffa7acb3bd18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f4be2f31-185f-4ed3-aa62-fafdc1532722', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=17612d87-daf5-475f-893f-bf1bb6834779) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:03:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:32Z|00154|binding|INFO|Setting lport 17612d87-daf5-475f-893f-bf1bb6834779 ovn-installed in OVS
Feb  2 07:03:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:32Z|00155|binding|INFO|Setting lport 17612d87-daf5-475f-893f-bf1bb6834779 up in Southbound
Feb  2 07:03:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:32.964 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 17612d87-daf5-475f-893f-bf1bb6834779 in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 bound to our chassis#033[00m
Feb  2 07:03:32 np0005604943 nova_compute[238883]: 2026-02-02 12:03:32.964 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:32.965 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34290362-cccd-452d-8e7e-22a6057fdb60#033[00m
Feb  2 07:03:32 np0005604943 systemd-udevd[260107]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:03:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:32.974 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab02a12-aba3-43da-b28a-6725000aead6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:32.975 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34290362-c1 in ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:03:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:32.977 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34290362-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:03:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:32.977 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6a732a23-cd3c-4c06-aa3b-b90d2611ddb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:32.978 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d10e5435-16a1-42cf-82cf-7107c0b2078b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:32 np0005604943 systemd-machined[206973]: New machine qemu-15-instance-0000000f.
Feb  2 07:03:32 np0005604943 NetworkManager[49093]: <info>  [1770033812.9837] device (tap17612d87-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:03:32 np0005604943 NetworkManager[49093]: <info>  [1770033812.9848] device (tap17612d87-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:03:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:32.987 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[db343b36-5236-4081-abeb-1d276b309978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:32.997 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c8883349-c8ca-4773-a8f8-de4b2e4a0cfc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:32 np0005604943 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.014 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[399a08fa-9029-4fd8-a885-f70e988d49e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:33 np0005604943 systemd-udevd[260111]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.018 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2822df13-f918-4ab3-89a2-18fecb98233d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:33 np0005604943 NetworkManager[49093]: <info>  [1770033813.0189] manager: (tap34290362-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.037 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6975b5-17da-4e5d-989f-fe6acfe0b831]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.039 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b36423-b450-406e-8d00-fe549aca92ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:33 np0005604943 NetworkManager[49093]: <info>  [1770033813.0545] device (tap34290362-c0): carrier: link connected
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.060 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[18d1e222-f0ff-4b9e-9956-4712238064dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.076 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9e73d26e-c587-4747-a7d3-1d55851e4893]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425840, 'reachable_time': 42590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260140, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.091 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[75a81635-4001-498e-b78e-cdfeca5b885d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb3:39d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 425840, 'tstamp': 425840}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260141, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.108 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[e63b7792-14fa-4c1a-b18d-3077b873a4de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425840, 'reachable_time': 42590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260142, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.134 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[0b638f9c-2ca2-4638-8ac4-0eea58afb534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.177 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d9adfa14-a673-470a-af7c-ab7597f2b252]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.178 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.179 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.179 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34290362-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:33 np0005604943 NetworkManager[49093]: <info>  [1770033813.1817] manager: (tap34290362-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Feb  2 07:03:33 np0005604943 kernel: tap34290362-c0: entered promiscuous mode
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.182 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.185 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34290362-c0, col_values=(('external_ids', {'iface-id': '54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.186 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:33 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:33Z|00156|binding|INFO|Releasing lport 54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c from this chassis (sb_readonly=0)
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.187 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.190 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.191 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[069f229f-2cb8-4b31-ba13-05491da1fc29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.192 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-34290362-cccd-452d-8e7e-22a6057fdb60
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID 34290362-cccd-452d-8e7e-22a6057fdb60
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 07:03:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:33.192 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'env', 'PROCESS_TAG=haproxy-34290362-cccd-452d-8e7e-22a6057fdb60', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34290362-cccd-452d-8e7e-22a6057fdb60.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.194 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.270 238887 DEBUG nova.compute.manager [req-733b46d4-5093-4d16-8db7-8acaf4277459 req-b6edb808-8889-47fb-8856-baec776583fb 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Received event network-vif-plugged-17612d87-daf5-475f-893f-bf1bb6834779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.270 238887 DEBUG oslo_concurrency.lockutils [req-733b46d4-5093-4d16-8db7-8acaf4277459 req-b6edb808-8889-47fb-8856-baec776583fb 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.270 238887 DEBUG oslo_concurrency.lockutils [req-733b46d4-5093-4d16-8db7-8acaf4277459 req-b6edb808-8889-47fb-8856-baec776583fb 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.271 238887 DEBUG oslo_concurrency.lockutils [req-733b46d4-5093-4d16-8db7-8acaf4277459 req-b6edb808-8889-47fb-8856-baec776583fb 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.271 238887 DEBUG nova.compute.manager [req-733b46d4-5093-4d16-8db7-8acaf4277459 req-b6edb808-8889-47fb-8856-baec776583fb 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Processing event network-vif-plugged-17612d87-daf5-475f-893f-bf1bb6834779 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:03:33 np0005604943 podman[260175]: 2026-02-02 12:03:33.50987583 +0000 UTC m=+0.045938458 container create ce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 07:03:33 np0005604943 systemd[1]: Started libpod-conmon-ce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22.scope.
Feb  2 07:03:33 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:03:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7e4ce54c2ad6a1789259aab6e23030b5b6f2957827a0872371f2f8cec3ddb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:03:33 np0005604943 podman[260175]: 2026-02-02 12:03:33.482782613 +0000 UTC m=+0.018845251 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:03:33 np0005604943 podman[260175]: 2026-02-02 12:03:33.579753862 +0000 UTC m=+0.115816490 container init ce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127)
Feb  2 07:03:33 np0005604943 podman[260175]: 2026-02-02 12:03:33.583702307 +0000 UTC m=+0.119764935 container start ce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:03:33 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260191]: [NOTICE]   (260195) : New worker (260197) forked
Feb  2 07:03:33 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260191]: [NOTICE]   (260195) : Loading success.
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.666 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.666 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.667 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.667 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:03:33 np0005604943 nova_compute[238883]: 2026-02-02 12:03:33.667 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:03:34 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2522827291' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.205 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.276 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.277 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.277 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.280 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.281 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:03:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 612 KiB/s rd, 2.6 MiB/s wr, 112 op/s
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.426 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.427 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4220MB free_disk=59.967240734025836GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.427 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.428 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.500 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance a96ffff3-5920-4e78-bdab-1435004f049f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.501 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 44d03b0b-b589-4231-845d-ffa7acb3bd18 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.501 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.501 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:03:34 np0005604943 nova_compute[238883]: 2026-02-02 12:03:34.559 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:03:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004371417' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:03:35 np0005604943 nova_compute[238883]: 2026-02-02 12:03:35.098 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:03:35 np0005604943 nova_compute[238883]: 2026-02-02 12:03:35.103 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:03:35 np0005604943 nova_compute[238883]: 2026-02-02 12:03:35.122 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:03:35 np0005604943 nova_compute[238883]: 2026-02-02 12:03:35.149 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:03:35 np0005604943 nova_compute[238883]: 2026-02-02 12:03:35.150 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:35 np0005604943 nova_compute[238883]: 2026-02-02 12:03:35.465 238887 DEBUG nova.compute.manager [req-f3d26fff-becb-4251-8fa7-c87af51a3bad req-d387dc5f-43a0-433a-986e-ea655eeae055 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Received event network-vif-plugged-17612d87-daf5-475f-893f-bf1bb6834779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:03:35 np0005604943 nova_compute[238883]: 2026-02-02 12:03:35.465 238887 DEBUG oslo_concurrency.lockutils [req-f3d26fff-becb-4251-8fa7-c87af51a3bad req-d387dc5f-43a0-433a-986e-ea655eeae055 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:35 np0005604943 nova_compute[238883]: 2026-02-02 12:03:35.465 238887 DEBUG oslo_concurrency.lockutils [req-f3d26fff-becb-4251-8fa7-c87af51a3bad req-d387dc5f-43a0-433a-986e-ea655eeae055 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:35 np0005604943 nova_compute[238883]: 2026-02-02 12:03:35.466 238887 DEBUG oslo_concurrency.lockutils [req-f3d26fff-becb-4251-8fa7-c87af51a3bad req-d387dc5f-43a0-433a-986e-ea655eeae055 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:35 np0005604943 nova_compute[238883]: 2026-02-02 12:03:35.466 238887 DEBUG nova.compute.manager [req-f3d26fff-becb-4251-8fa7-c87af51a3bad req-d387dc5f-43a0-433a-986e-ea655eeae055 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] No waiting events found dispatching network-vif-plugged-17612d87-daf5-475f-893f-bf1bb6834779 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:03:35 np0005604943 nova_compute[238883]: 2026-02-02 12:03:35.466 238887 WARNING nova.compute.manager [req-f3d26fff-becb-4251-8fa7-c87af51a3bad req-d387dc5f-43a0-433a-986e-ea655eeae055 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Received unexpected event network-vif-plugged-17612d87-daf5-475f-893f-bf1bb6834779 for instance with vm_state building and task_state spawning.#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.150 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.151 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.151 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.174 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.295 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "refresh_cache-a96ffff3-5920-4e78-bdab-1435004f049f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.295 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquired lock "refresh_cache-a96ffff3-5920-4e78-bdab-1435004f049f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.295 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.295 238887 DEBUG nova.objects.instance [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lazy-loading 'info_cache' on Instance uuid a96ffff3-5920-4e78-bdab-1435004f049f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:03:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 612 KiB/s rd, 2.6 MiB/s wr, 112 op/s
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.357 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033816.356654, 44d03b0b-b589-4231-845d-ffa7acb3bd18 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.357 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] VM Started (Lifecycle Event)#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.360 238887 DEBUG nova.compute.manager [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.365 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.369 238887 INFO nova.virt.libvirt.driver [-] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Instance spawned successfully.#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.369 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.378 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.389 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.397 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.398 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.399 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.400 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.400 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.401 238887 DEBUG nova.virt.libvirt.driver [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.410 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.411 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033816.3603387, 44d03b0b-b589-4231-845d-ffa7acb3bd18 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.411 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.432 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.449 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.451 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033816.3631675, 44d03b0b-b589-4231-845d-ffa7acb3bd18 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.452 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.475 238887 INFO nova.compute.manager [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Took 7.22 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.476 238887 DEBUG nova.compute.manager [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.478 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.484 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.518 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.543 238887 INFO nova.compute.manager [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Took 9.54 seconds to build instance.#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.566 238887 DEBUG oslo_concurrency.lockutils [None req-cde97aa8-90a2-4915-930d-39663fc95642 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:36 np0005604943 nova_compute[238883]: 2026-02-02 12:03:36.843 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:37 np0005604943 nova_compute[238883]: 2026-02-02 12:03:37.371 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Updating instance_info_cache with network_info: [{"id": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "address": "fa:16:3e:db:79:b0", "network": {"id": "ed0c9eb2-5a02-4561-b216-a1cb6ff3164f", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574964546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee083e554351460bb418a3d98b537343", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb56f4bc-ae", "ovs_interfaceid": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:03:37 np0005604943 nova_compute[238883]: 2026-02-02 12:03:37.385 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Releasing lock "refresh_cache-a96ffff3-5920-4e78-bdab-1435004f049f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:03:37 np0005604943 nova_compute[238883]: 2026-02-02 12:03:37.385 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 07:03:37 np0005604943 nova_compute[238883]: 2026-02-02 12:03:37.386 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:03:37 np0005604943 nova_compute[238883]: 2026-02-02 12:03:37.386 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:03:37 np0005604943 nova_compute[238883]: 2026-02-02 12:03:37.386 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:03:37 np0005604943 nova_compute[238883]: 2026-02-02 12:03:37.386 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:03:37 np0005604943 nova_compute[238883]: 2026-02-02 12:03:37.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:03:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:03:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 598 KiB/s rd, 2.4 MiB/s wr, 109 op/s
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.625 238887 DEBUG oslo_concurrency.lockutils [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "44d03b0b-b589-4231-845d-ffa7acb3bd18" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.625 238887 DEBUG oslo_concurrency.lockutils [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.626 238887 DEBUG oslo_concurrency.lockutils [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.626 238887 DEBUG oslo_concurrency.lockutils [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.626 238887 DEBUG oslo_concurrency.lockutils [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.627 238887 INFO nova.compute.manager [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Terminating instance#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.629 238887 DEBUG nova.compute.manager [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.635 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:03:38 np0005604943 kernel: tap17612d87-da (unregistering): left promiscuous mode
Feb  2 07:03:38 np0005604943 NetworkManager[49093]: <info>  [1770033818.6602] device (tap17612d87-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:03:38 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:38Z|00157|binding|INFO|Releasing lport 17612d87-daf5-475f-893f-bf1bb6834779 from this chassis (sb_readonly=0)
Feb  2 07:03:38 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:38Z|00158|binding|INFO|Setting lport 17612d87-daf5-475f-893f-bf1bb6834779 down in Southbound
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.666 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:38 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:38Z|00159|binding|INFO|Removing iface tap17612d87-da ovn-installed in OVS
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.668 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.675 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:aa:5e 10.100.0.3'], port_security=['fa:16:3e:ef:aa:5e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '44d03b0b-b589-4231-845d-ffa7acb3bd18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f4be2f31-185f-4ed3-aa62-fafdc1532722', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=17612d87-daf5-475f-893f-bf1bb6834779) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.676 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 17612d87-daf5-475f-893f-bf1bb6834779 in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 unbound from our chassis#033[00m
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.678 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34290362-cccd-452d-8e7e-22a6057fdb60, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.677 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.679 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2bab09a3-948b-4650-9284-f8194a746d66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.680 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 namespace which is not needed anymore#033[00m
Feb  2 07:03:38 np0005604943 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Feb  2 07:03:38 np0005604943 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 3.864s CPU time.
Feb  2 07:03:38 np0005604943 systemd-machined[206973]: Machine qemu-15-instance-0000000f terminated.
Feb  2 07:03:38 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260191]: [NOTICE]   (260195) : haproxy version is 2.8.14-c23fe91
Feb  2 07:03:38 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260191]: [NOTICE]   (260195) : path to executable is /usr/sbin/haproxy
Feb  2 07:03:38 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260191]: [WARNING]  (260195) : Exiting Master process...
Feb  2 07:03:38 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260191]: [ALERT]    (260195) : Current worker (260197) exited with code 143 (Terminated)
Feb  2 07:03:38 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260191]: [WARNING]  (260195) : All workers exited. Exiting... (0)
Feb  2 07:03:38 np0005604943 systemd[1]: libpod-ce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22.scope: Deactivated successfully.
Feb  2 07:03:38 np0005604943 podman[260316]: 2026-02-02 12:03:38.796477113 +0000 UTC m=+0.041770178 container died ce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 07:03:38 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22-userdata-shm.mount: Deactivated successfully.
Feb  2 07:03:38 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4f7e4ce54c2ad6a1789259aab6e23030b5b6f2957827a0872371f2f8cec3ddb6-merged.mount: Deactivated successfully.
Feb  2 07:03:38 np0005604943 podman[260316]: 2026-02-02 12:03:38.828794289 +0000 UTC m=+0.074087354 container cleanup ce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:03:38 np0005604943 systemd[1]: libpod-conmon-ce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22.scope: Deactivated successfully.
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.843 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.847 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.855 238887 INFO nova.virt.libvirt.driver [-] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Instance destroyed successfully.#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.856 238887 DEBUG nova.objects.instance [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'resources' on Instance uuid 44d03b0b-b589-4231-845d-ffa7acb3bd18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.867 238887 DEBUG nova.virt.libvirt.vif [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2142058609',display_name='tempest-TestVolumeBootPattern-server-2142058609',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2142058609',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:03:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-cs06v0er',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-proje
ct-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:03:36Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=44d03b0b-b589-4231-845d-ffa7acb3bd18,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "17612d87-daf5-475f-893f-bf1bb6834779", "address": "fa:16:3e:ef:aa:5e", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17612d87-da", "ovs_interfaceid": "17612d87-daf5-475f-893f-bf1bb6834779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.867 238887 DEBUG nova.network.os_vif_util [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "17612d87-daf5-475f-893f-bf1bb6834779", "address": "fa:16:3e:ef:aa:5e", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17612d87-da", "ovs_interfaceid": "17612d87-daf5-475f-893f-bf1bb6834779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.868 238887 DEBUG nova.network.os_vif_util [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:aa:5e,bridge_name='br-int',has_traffic_filtering=True,id=17612d87-daf5-475f-893f-bf1bb6834779,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17612d87-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.869 238887 DEBUG os_vif [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:aa:5e,bridge_name='br-int',has_traffic_filtering=True,id=17612d87-daf5-475f-893f-bf1bb6834779,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17612d87-da') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.870 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.871 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17612d87-da, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.872 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.874 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.876 238887 INFO os_vif [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:aa:5e,bridge_name='br-int',has_traffic_filtering=True,id=17612d87-daf5-475f-893f-bf1bb6834779,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17612d87-da')#033[00m
Feb  2 07:03:38 np0005604943 podman[260347]: 2026-02-02 12:03:38.886844027 +0000 UTC m=+0.041857079 container remove ce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.891 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5d6b1d-8135-45d2-abfb-209694a37ea5]: (4, ('Mon Feb  2 12:03:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 (ce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22)\nce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22\nMon Feb  2 12:03:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 (ce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22)\nce260e20c9fd8114dd34afb990725221495874724c25a48f339e751481241c22\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.892 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b8a141-21a6-4923-b24c-a855cb8f200a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.893 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:38 np0005604943 kernel: tap34290362-c0: left promiscuous mode
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.901 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.906 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b74944e6-5afa-4dab-bc2a-7784543d7b84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.918 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ba698018-4a21-487f-8a0f-06746059cb68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.919 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4a05cf-ad2e-499a-a93a-2a681bb1257f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.931 238887 DEBUG nova.compute.manager [req-c79ab557-cc22-48f1-b7b5-6aef41491bc4 req-a6477e94-8ad2-4ed0-bde1-be53cacb9e7f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Received event network-vif-unplugged-17612d87-daf5-475f-893f-bf1bb6834779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.931 238887 DEBUG oslo_concurrency.lockutils [req-c79ab557-cc22-48f1-b7b5-6aef41491bc4 req-a6477e94-8ad2-4ed0-bde1-be53cacb9e7f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.931 238887 DEBUG oslo_concurrency.lockutils [req-c79ab557-cc22-48f1-b7b5-6aef41491bc4 req-a6477e94-8ad2-4ed0-bde1-be53cacb9e7f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.932 238887 DEBUG oslo_concurrency.lockutils [req-c79ab557-cc22-48f1-b7b5-6aef41491bc4 req-a6477e94-8ad2-4ed0-bde1-be53cacb9e7f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.932 238887 DEBUG nova.compute.manager [req-c79ab557-cc22-48f1-b7b5-6aef41491bc4 req-a6477e94-8ad2-4ed0-bde1-be53cacb9e7f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] No waiting events found dispatching network-vif-unplugged-17612d87-daf5-475f-893f-bf1bb6834779 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:03:38 np0005604943 nova_compute[238883]: 2026-02-02 12:03:38.932 238887 DEBUG nova.compute.manager [req-c79ab557-cc22-48f1-b7b5-6aef41491bc4 req-a6477e94-8ad2-4ed0-bde1-be53cacb9e7f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Received event network-vif-unplugged-17612d87-daf5-475f-893f-bf1bb6834779 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.939 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[8886ed59-221c-4a7e-8ed3-870e8e273c6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425836, 'reachable_time': 28637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260390, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:38 np0005604943 systemd[1]: run-netns-ovnmeta\x2d34290362\x2dcccd\x2d452d\x2d8e7e\x2d22a6057fdb60.mount: Deactivated successfully.
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.943 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:03:38 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:38.943 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[8d176056-3e8f-438f-b20d-e410db602b57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.027 238887 INFO nova.virt.libvirt.driver [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Deleting instance files /var/lib/nova/instances/44d03b0b-b589-4231-845d-ffa7acb3bd18_del#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.028 238887 INFO nova.virt.libvirt.driver [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Deletion of /var/lib/nova/instances/44d03b0b-b589-4231-845d-ffa7acb3bd18_del complete#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.106 238887 INFO nova.compute.manager [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Took 0.48 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.107 238887 DEBUG oslo.service.loopingcall [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.107 238887 DEBUG nova.compute.manager [-] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.107 238887 DEBUG nova.network.neutron [-] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.203 238887 DEBUG oslo_concurrency.lockutils [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Acquiring lock "a96ffff3-5920-4e78-bdab-1435004f049f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.204 238887 DEBUG oslo_concurrency.lockutils [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.204 238887 DEBUG oslo_concurrency.lockutils [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Acquiring lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.204 238887 DEBUG oslo_concurrency.lockutils [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.205 238887 DEBUG oslo_concurrency.lockutils [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.206 238887 INFO nova.compute.manager [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Terminating instance#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.208 238887 DEBUG nova.compute.manager [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:03:39 np0005604943 kernel: tapcb56f4bc-ae (unregistering): left promiscuous mode
Feb  2 07:03:39 np0005604943 NetworkManager[49093]: <info>  [1770033819.2605] device (tapcb56f4bc-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.264 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:39 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:39Z|00160|binding|INFO|Releasing lport cb56f4bc-ae6e-4f97-afb4-1d300f11c761 from this chassis (sb_readonly=0)
Feb  2 07:03:39 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:39Z|00161|binding|INFO|Setting lport cb56f4bc-ae6e-4f97-afb4-1d300f11c761 down in Southbound
Feb  2 07:03:39 np0005604943 ovn_controller[145056]: 2026-02-02T12:03:39Z|00162|binding|INFO|Removing iface tapcb56f4bc-ae ovn-installed in OVS
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.268 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.276 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:79:b0 10.100.0.12'], port_security=['fa:16:3e:db:79:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a96ffff3-5920-4e78-bdab-1435004f049f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ee083e554351460bb418a3d98b537343', 'neutron:revision_number': '4', 'neutron:security_group_ids': '83a2be6b-fd76-4549-82b1-9fd8e8284c8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c85056b-f6a9-4ab1-bc66-86e8e15bd6fd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=cb56f4bc-ae6e-4f97-afb4-1d300f11c761) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.277 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.279 155011 INFO neutron.agent.ovn.metadata.agent [-] Port cb56f4bc-ae6e-4f97-afb4-1d300f11c761 in datapath ed0c9eb2-5a02-4561-b216-a1cb6ff3164f unbound from our chassis#033[00m
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.280 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed0c9eb2-5a02-4561-b216-a1cb6ff3164f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.281 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[bdcf257d-2684-46a7-937c-beb0b2d2a4d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.282 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f namespace which is not needed anymore#033[00m
Feb  2 07:03:39 np0005604943 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Feb  2 07:03:39 np0005604943 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 13.141s CPU time.
Feb  2 07:03:39 np0005604943 systemd-machined[206973]: Machine qemu-14-instance-0000000e terminated.
Feb  2 07:03:39 np0005604943 podman[260397]: 2026-02-02 12:03:39.336141724 +0000 UTC m=+0.053669923 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb  2 07:03:39 np0005604943 podman[260394]: 2026-02-02 12:03:39.363034396 +0000 UTC m=+0.080184456 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 07:03:39 np0005604943 neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f[259265]: [NOTICE]   (259270) : haproxy version is 2.8.14-c23fe91
Feb  2 07:03:39 np0005604943 neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f[259265]: [NOTICE]   (259270) : path to executable is /usr/sbin/haproxy
Feb  2 07:03:39 np0005604943 neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f[259265]: [WARNING]  (259270) : Exiting Master process...
Feb  2 07:03:39 np0005604943 neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f[259265]: [ALERT]    (259270) : Current worker (259272) exited with code 143 (Terminated)
Feb  2 07:03:39 np0005604943 neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f[259265]: [WARNING]  (259270) : All workers exited. Exiting... (0)
Feb  2 07:03:39 np0005604943 systemd[1]: libpod-9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065.scope: Deactivated successfully.
Feb  2 07:03:39 np0005604943 podman[260454]: 2026-02-02 12:03:39.406853948 +0000 UTC m=+0.039152269 container died 9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 07:03:39 np0005604943 NetworkManager[49093]: <info>  [1770033819.4235] manager: (tapcb56f4bc-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.428 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:39 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065-userdata-shm.mount: Deactivated successfully.
Feb  2 07:03:39 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a62d290185e6c1e3f45e6bc5c345e8324e8b34f45fcad4b446258c92607cb88a-merged.mount: Deactivated successfully.
Feb  2 07:03:39 np0005604943 podman[260454]: 2026-02-02 12:03:39.442504982 +0000 UTC m=+0.074803303 container cleanup 9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.444 238887 INFO nova.virt.libvirt.driver [-] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Instance destroyed successfully.#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.444 238887 DEBUG nova.objects.instance [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lazy-loading 'resources' on Instance uuid a96ffff3-5920-4e78-bdab-1435004f049f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:03:39 np0005604943 systemd[1]: libpod-conmon-9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065.scope: Deactivated successfully.
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.460 238887 DEBUG nova.virt.libvirt.vif [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:03:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-instance-1917573818',display_name='tempest-instance-1917573818',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1917573818',id=14,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHreMOpv+vG56C29AwHK7dAMPisR6ubzzXv8jjNXBzeJA+YypvAIWLUASYtM8sV/rTkRA72DOyN4tEasdZNuM3qKxpn4WpIZks80MjgEBt2yWjxqqZJ8dPVOJ01rpDLaHQ==',key_name='tempest-keypair-1465596797',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:03:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ee083e554351460bb418a3d98b537343',ramdisk_id='',reservation_id='r-u9a62mcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1399459303',owner_user_name='tempest-VolumesBackupsTest-1399459303-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:03:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='60fb6bd172e548f3a5aaa37de0e4fc9f',uuid=a96ffff3-5920-4e78-bdab-1435004f049f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "address": "fa:16:3e:db:79:b0", "network": {"id": "ed0c9eb2-5a02-4561-b216-a1cb6ff3164f", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574964546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee083e554351460bb418a3d98b537343", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb56f4bc-ae", "ovs_interfaceid": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.461 238887 DEBUG nova.network.os_vif_util [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Converting VIF {"id": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "address": "fa:16:3e:db:79:b0", "network": {"id": "ed0c9eb2-5a02-4561-b216-a1cb6ff3164f", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574964546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee083e554351460bb418a3d98b537343", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb56f4bc-ae", "ovs_interfaceid": "cb56f4bc-ae6e-4f97-afb4-1d300f11c761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.461 238887 DEBUG nova.network.os_vif_util [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:79:b0,bridge_name='br-int',has_traffic_filtering=True,id=cb56f4bc-ae6e-4f97-afb4-1d300f11c761,network=Network(ed0c9eb2-5a02-4561-b216-a1cb6ff3164f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb56f4bc-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.462 238887 DEBUG os_vif [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:79:b0,bridge_name='br-int',has_traffic_filtering=True,id=cb56f4bc-ae6e-4f97-afb4-1d300f11c761,network=Network(ed0c9eb2-5a02-4561-b216-a1cb6ff3164f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb56f4bc-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.464 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.464 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb56f4bc-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.465 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.468 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.470 238887 INFO os_vif [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:79:b0,bridge_name='br-int',has_traffic_filtering=True,id=cb56f4bc-ae6e-4f97-afb4-1d300f11c761,network=Network(ed0c9eb2-5a02-4561-b216-a1cb6ff3164f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb56f4bc-ae')#033[00m
Feb  2 07:03:39 np0005604943 podman[260489]: 2026-02-02 12:03:39.499943404 +0000 UTC m=+0.038893011 container remove 9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127)
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.505 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9e92c172-9d70-43a6-861f-199f71aca968]: (4, ('Mon Feb  2 12:03:39 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f (9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065)\n9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065\nMon Feb  2 12:03:39 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f (9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065)\n9f6a006988b1b3f1e1ab1bf2d8d1cf0acb19b21b266a0eaec80c3f50d3a3e065\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.508 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[eb97d02f-68c7-4445-bfc8-52e36b64b465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.509 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped0c9eb2-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.511 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:39 np0005604943 kernel: taped0c9eb2-50: left promiscuous mode
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.519 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.520 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a3faf024-c504-4482-a8a0-7e582cddf287]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.535 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7ffd9e81-6c4f-4f71-9f0f-97ebd96d4897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.536 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f191f2-cf00-48fc-a026-7bf67b96febe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.547 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad71fc1-ca30-4837-8801-0c65a439ca60]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424371, 'reachable_time': 29689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260530, 'error': None, 'target': 'ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.548 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed0c9eb2-5a02-4561-b216-a1cb6ff3164f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:03:39 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:03:39.549 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[76cf1f9e-53b7-4177-a73e-3057ee19ef7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.635 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.695 238887 INFO nova.virt.libvirt.driver [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Deleting instance files /var/lib/nova/instances/a96ffff3-5920-4e78-bdab-1435004f049f_del#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.696 238887 INFO nova.virt.libvirt.driver [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Deletion of /var/lib/nova/instances/a96ffff3-5920-4e78-bdab-1435004f049f_del complete#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.723 238887 DEBUG nova.network.neutron [-] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.753 238887 INFO nova.compute.manager [-] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Took 0.65 seconds to deallocate network for instance.#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.768 238887 INFO nova.compute.manager [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Took 0.56 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.769 238887 DEBUG oslo.service.loopingcall [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.771 238887 DEBUG nova.compute.manager [-] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.772 238887 DEBUG nova.network.neutron [-] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:03:39 np0005604943 systemd[1]: run-netns-ovnmeta\x2ded0c9eb2\x2d5a02\x2d4561\x2db216\x2da1cb6ff3164f.mount: Deactivated successfully.
Feb  2 07:03:39 np0005604943 nova_compute[238883]: 2026-02-02 12:03:39.888 238887 DEBUG nova.compute.manager [req-94573ad1-f834-4e6b-97b6-f207e8a875a9 req-4e79352c-f476-4631-b680-50d6c0ae90d1 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Received event network-vif-deleted-17612d87-daf5-475f-893f-bf1bb6834779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:03:40 np0005604943 nova_compute[238883]: 2026-02-02 12:03:40.007 238887 INFO nova.compute.manager [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Took 0.25 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:03:40 np0005604943 nova_compute[238883]: 2026-02-02 12:03:40.050 238887 DEBUG oslo_concurrency.lockutils [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:03:40 np0005604943 nova_compute[238883]: 2026-02-02 12:03:40.051 238887 DEBUG oslo_concurrency.lockutils [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:03:40 np0005604943 nova_compute[238883]: 2026-02-02 12:03:40.131 238887 DEBUG oslo_concurrency.processutils [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:03:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 536 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Feb  2 07:03:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:03:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833548411' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:03:40 np0005604943 nova_compute[238883]: 2026-02-02 12:03:40.664 238887 DEBUG oslo_concurrency.processutils [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:03:40 np0005604943 nova_compute[238883]: 2026-02-02 12:03:40.670 238887 DEBUG nova.compute.provider_tree [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 07:03:40 np0005604943 nova_compute[238883]: 2026-02-02 12:03:40.686 238887 DEBUG nova.scheduler.client.report [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 07:03:40 np0005604943 nova_compute[238883]: 2026-02-02 12:03:40.717 238887 DEBUG oslo_concurrency.lockutils [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:03:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:03:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:03:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:03:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:03:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:03:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:03:40 np0005604943 nova_compute[238883]: 2026-02-02 12:03:40.762 238887 INFO nova.scheduler.client.report [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Deleted allocations for instance 44d03b0b-b589-4231-845d-ffa7acb3bd18
Feb  2 07:03:40 np0005604943 nova_compute[238883]: 2026-02-02 12:03:40.833 238887 DEBUG oslo_concurrency.lockutils [None req-d54a6303-cb87-4825-a06d-45a36d01fbde 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.034 238887 DEBUG nova.network.neutron [-] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.066 238887 INFO nova.compute.manager [-] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Took 1.29 seconds to deallocate network for instance.
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.163 238887 DEBUG nova.compute.manager [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Received event network-vif-plugged-17612d87-daf5-475f-893f-bf1bb6834779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.164 238887 DEBUG oslo_concurrency.lockutils [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.164 238887 DEBUG oslo_concurrency.lockutils [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.165 238887 DEBUG oslo_concurrency.lockutils [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "44d03b0b-b589-4231-845d-ffa7acb3bd18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.165 238887 DEBUG nova.compute.manager [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] No waiting events found dispatching network-vif-plugged-17612d87-daf5-475f-893f-bf1bb6834779 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.166 238887 WARNING nova.compute.manager [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Received unexpected event network-vif-plugged-17612d87-daf5-475f-893f-bf1bb6834779 for instance with vm_state deleted and task_state None.
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.166 238887 DEBUG nova.compute.manager [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Received event network-vif-unplugged-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.167 238887 DEBUG oslo_concurrency.lockutils [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.167 238887 DEBUG oslo_concurrency.lockutils [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.167 238887 DEBUG oslo_concurrency.lockutils [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.167 238887 DEBUG nova.compute.manager [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] No waiting events found dispatching network-vif-unplugged-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.168 238887 DEBUG nova.compute.manager [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Received event network-vif-unplugged-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.168 238887 DEBUG nova.compute.manager [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Received event network-vif-plugged-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.168 238887 DEBUG oslo_concurrency.lockutils [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.169 238887 DEBUG oslo_concurrency.lockutils [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.169 238887 DEBUG oslo_concurrency.lockutils [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.169 238887 DEBUG nova.compute.manager [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] No waiting events found dispatching network-vif-plugged-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.170 238887 WARNING nova.compute.manager [req-a8f66dae-1f11-4c77-9392-b4e745d5e652 req-7eb5811e-8a2c-4c75-bbeb-322dc051928a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Received unexpected event network-vif-plugged-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 for instance with vm_state active and task_state deleting.
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.434 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.545 238887 INFO nova.compute.manager [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Took 0.48 seconds to detach 1 volumes for instance.
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.592 238887 DEBUG oslo_concurrency.lockutils [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.592 238887 DEBUG oslo_concurrency.lockutils [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:03:41 np0005604943 nova_compute[238883]: 2026-02-02 12:03:41.625 238887 DEBUG oslo_concurrency.processutils [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:03:42 np0005604943 nova_compute[238883]: 2026-02-02 12:03:42.064 238887 DEBUG nova.compute.manager [req-02f81a97-456c-4746-9683-beda613a1d98 req-6abe6129-9646-429b-8528-79e79c74e713 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Received event network-vif-deleted-cb56f4bc-ae6e-4f97-afb4-1d300f11c761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 07:03:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:03:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/892130156' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:03:42 np0005604943 nova_compute[238883]: 2026-02-02 12:03:42.168 238887 DEBUG oslo_concurrency.processutils [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:03:42 np0005604943 nova_compute[238883]: 2026-02-02 12:03:42.174 238887 DEBUG nova.compute.provider_tree [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 07:03:42 np0005604943 nova_compute[238883]: 2026-02-02 12:03:42.205 238887 DEBUG nova.scheduler.client.report [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 07:03:42 np0005604943 nova_compute[238883]: 2026-02-02 12:03:42.233 238887 DEBUG oslo_concurrency.lockutils [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:03:42 np0005604943 nova_compute[238883]: 2026-02-02 12:03:42.311 238887 INFO nova.scheduler.client.report [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Deleted allocations for instance a96ffff3-5920-4e78-bdab-1435004f049f
Feb  2 07:03:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 548 KiB/s rd, 2.2 MiB/s wr, 117 op/s
Feb  2 07:03:42 np0005604943 nova_compute[238883]: 2026-02-02 12:03:42.388 238887 DEBUG oslo_concurrency.lockutils [None req-cb056a86-ecf9-45ee-a86f-0826a639a9b8 60fb6bd172e548f3a5aaa37de0e4fc9f ee083e554351460bb418a3d98b537343 - - default default] Lock "a96ffff3-5920-4e78-bdab-1435004f049f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:03:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:03:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3631168752' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:03:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:03:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1255160062' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:03:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:03:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1255160062' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:03:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:03:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Feb  2 07:03:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Feb  2 07:03:43 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Feb  2 07:03:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Feb  2 07:03:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Feb  2 07:03:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Feb  2 07:03:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 400 KiB/s rd, 1.1 MiB/s wr, 102 op/s
Feb  2 07:03:44 np0005604943 nova_compute[238883]: 2026-02-02 12:03:44.466 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:03:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:03:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/695731329' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:03:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:03:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/695731329' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:03:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Feb  2 07:03:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Feb  2 07:03:46 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Feb  2 07:03:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 480 KiB/s rd, 1.5 MiB/s wr, 120 op/s
Feb  2 07:03:46 np0005604943 nova_compute[238883]: 2026-02-02 12:03:46.436 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:03:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:03:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Feb  2 07:03:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Feb  2 07:03:48 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Feb  2 07:03:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 7.2 MiB/s rd, 4.7 MiB/s wr, 243 op/s
Feb  2 07:03:49 np0005604943 nova_compute[238883]: 2026-02-02 12:03:49.468 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:03:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 6.9 MiB/s rd, 2.6 MiB/s wr, 129 op/s
Feb  2 07:03:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Feb  2 07:03:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Feb  2 07:03:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Feb  2 07:03:51 np0005604943 nova_compute[238883]: 2026-02-02 12:03:51.438 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:03:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 7.1 MiB/s rd, 4.6 MiB/s wr, 263 op/s
Feb  2 07:03:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:03:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Feb  2 07:03:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Feb  2 07:03:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Feb  2 07:03:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:03:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3312703706' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:03:53 np0005604943 nova_compute[238883]: 2026-02-02 12:03:53.855 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033818.854136, 44d03b0b-b589-4231-845d-ffa7acb3bd18 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:03:53 np0005604943 nova_compute[238883]: 2026-02-02 12:03:53.856 238887 INFO nova.compute.manager [-] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] VM Stopped (Lifecycle Event)
Feb  2 07:03:53 np0005604943 nova_compute[238883]: 2026-02-02 12:03:53.876 238887 DEBUG nova.compute.manager [None req-666e31a5-ad77-4c88-bf60-49f4024ba9dc - - - - - -] [instance: 44d03b0b-b589-4231-845d-ffa7acb3bd18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:03:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.4 MiB/s wr, 162 op/s
Feb  2 07:03:54 np0005604943 nova_compute[238883]: 2026-02-02 12:03:54.444 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033819.442866, a96ffff3-5920-4e78-bdab-1435004f049f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:03:54 np0005604943 nova_compute[238883]: 2026-02-02 12:03:54.444 238887 INFO nova.compute.manager [-] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] VM Stopped (Lifecycle Event)
Feb  2 07:03:54 np0005604943 nova_compute[238883]: 2026-02-02 12:03:54.463 238887 DEBUG nova.compute.manager [None req-c5660a84-b1a4-4850-aaea-1c1ef5ae69d6 - - - - - -] [instance: a96ffff3-5920-4e78-bdab-1435004f049f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:03:54 np0005604943 nova_compute[238883]: 2026-02-02 12:03:54.470 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:03:55 np0005604943 nova_compute[238883]: 2026-02-02 12:03:55.359 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:03:55 np0005604943 nova_compute[238883]: 2026-02-02 12:03:55.360 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:03:55 np0005604943 nova_compute[238883]: 2026-02-02 12:03:55.378 238887 DEBUG nova.compute.manager [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb  2 07:03:55 np0005604943 nova_compute[238883]: 2026-02-02 12:03:55.453 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:03:55 np0005604943 nova_compute[238883]: 2026-02-02 12:03:55.453 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:03:55 np0005604943 nova_compute[238883]: 2026-02-02 12:03:55.464 238887 DEBUG nova.virt.hardware [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb  2 07:03:55 np0005604943 nova_compute[238883]: 2026-02-02 12:03:55.464 238887 INFO nova.compute.claims [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Claim successful on node compute-0.ctlplane.example.com
Feb  2 07:03:55 np0005604943 nova_compute[238883]: 2026-02-02 12:03:55.575 238887 DEBUG oslo_concurrency.processutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:03:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:03:56 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2688752741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:03:56 np0005604943 nova_compute[238883]: 2026-02-02 12:03:56.149 238887 DEBUG oslo_concurrency.processutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:03:56 np0005604943 nova_compute[238883]: 2026-02-02 12:03:56.156 238887 DEBUG nova.compute.provider_tree [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 07:03:56 np0005604943 nova_compute[238883]: 2026-02-02 12:03:56.173 238887 DEBUG nova.scheduler.client.report [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 07:03:56 np0005604943 nova_compute[238883]: 2026-02-02 12:03:56.199 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:03:56 np0005604943 nova_compute[238883]: 2026-02-02 12:03:56.200 238887 DEBUG nova.compute.manager [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb  2 07:03:56 np0005604943 nova_compute[238883]: 2026-02-02 12:03:56.252 238887 DEBUG nova.compute.manager [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb  2 07:03:56 np0005604943 nova_compute[238883]: 2026-02-02 12:03:56.252 238887 DEBUG nova.network.neutron [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb  2 07:03:56 np0005604943 nova_compute[238883]: 2026-02-02 12:03:56.279 238887 INFO nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb  2 07:03:56 np0005604943 nova_compute[238883]: 2026-02-02 12:03:56.301 238887 DEBUG nova.compute.manager [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:03:56 np0005604943 nova_compute[238883]: 2026-02-02 12:03:56.344 238887 INFO nova.virt.block_device [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Booting with volume snapshot c4975fab-d680-44b0-9184-184bfe0df6b2 at /dev/vda#033[00m
Feb  2 07:03:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.7 MiB/s wr, 126 op/s
Feb  2 07:03:56 np0005604943 nova_compute[238883]: 2026-02-02 12:03:56.440 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:56 np0005604943 nova_compute[238883]: 2026-02-02 12:03:56.453 238887 DEBUG nova.policy [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e3fc9d8415541ecaa0da4968c9fa242', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e66ed51ccbb840f083b8a86476696747', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:03:56 np0005604943 nova_compute[238883]: 2026-02-02 12:03:56.985 238887 DEBUG nova.network.neutron [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Successfully created port: 61877ae2-b263-468a-aa34-5363e313ade2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:03:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:03:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Feb  2 07:03:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Feb  2 07:03:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Feb  2 07:03:58 np0005604943 nova_compute[238883]: 2026-02-02 12:03:58.184 238887 DEBUG nova.network.neutron [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Successfully updated port: 61877ae2-b263-468a-aa34-5363e313ade2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:03:58 np0005604943 nova_compute[238883]: 2026-02-02 12:03:58.208 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "refresh_cache-2d1ce093-ec3c-4af8-96cd-d55b8a832810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:03:58 np0005604943 nova_compute[238883]: 2026-02-02 12:03:58.209 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquired lock "refresh_cache-2d1ce093-ec3c-4af8-96cd-d55b8a832810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:03:58 np0005604943 nova_compute[238883]: 2026-02-02 12:03:58.209 238887 DEBUG nova.network.neutron [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:03:58 np0005604943 nova_compute[238883]: 2026-02-02 12:03:58.278 238887 DEBUG nova.compute.manager [req-fcd352f8-711c-4852-9452-d22ff933e941 req-8fadb494-35e7-48ce-93f4-b5a831b140fc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Received event network-changed-61877ae2-b263-468a-aa34-5363e313ade2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:03:58 np0005604943 nova_compute[238883]: 2026-02-02 12:03:58.278 238887 DEBUG nova.compute.manager [req-fcd352f8-711c-4852-9452-d22ff933e941 req-8fadb494-35e7-48ce-93f4-b5a831b140fc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Refreshing instance network info cache due to event network-changed-61877ae2-b263-468a-aa34-5363e313ade2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:03:58 np0005604943 nova_compute[238883]: 2026-02-02 12:03:58.278 238887 DEBUG oslo_concurrency.lockutils [req-fcd352f8-711c-4852-9452-d22ff933e941 req-8fadb494-35e7-48ce-93f4-b5a831b140fc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-2d1ce093-ec3c-4af8-96cd-d55b8a832810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:03:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 2.6 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 227 KiB/s rd, 63 MiB/s wr, 363 op/s
Feb  2 07:03:58 np0005604943 nova_compute[238883]: 2026-02-02 12:03:58.840 238887 DEBUG nova.network.neutron [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:03:59 np0005604943 nova_compute[238883]: 2026-02-02 12:03:59.473 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:03:59 np0005604943 nova_compute[238883]: 2026-02-02 12:03:59.624 238887 DEBUG nova.network.neutron [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Updating instance_info_cache with network_info: [{"id": "61877ae2-b263-468a-aa34-5363e313ade2", "address": "fa:16:3e:8c:af:11", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61877ae2-b2", "ovs_interfaceid": "61877ae2-b263-468a-aa34-5363e313ade2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:03:59 np0005604943 nova_compute[238883]: 2026-02-02 12:03:59.654 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Releasing lock "refresh_cache-2d1ce093-ec3c-4af8-96cd-d55b8a832810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:03:59 np0005604943 nova_compute[238883]: 2026-02-02 12:03:59.654 238887 DEBUG nova.compute.manager [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Instance network_info: |[{"id": "61877ae2-b263-468a-aa34-5363e313ade2", "address": "fa:16:3e:8c:af:11", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61877ae2-b2", "ovs_interfaceid": "61877ae2-b263-468a-aa34-5363e313ade2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:03:59 np0005604943 nova_compute[238883]: 2026-02-02 12:03:59.655 238887 DEBUG oslo_concurrency.lockutils [req-fcd352f8-711c-4852-9452-d22ff933e941 req-8fadb494-35e7-48ce-93f4-b5a831b140fc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-2d1ce093-ec3c-4af8-96cd-d55b8a832810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:03:59 np0005604943 nova_compute[238883]: 2026-02-02 12:03:59.655 238887 DEBUG nova.network.neutron [req-fcd352f8-711c-4852-9452-d22ff933e941 req-8fadb494-35e7-48ce-93f4-b5a831b140fc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Refreshing network info cache for port 61877ae2-b263-468a-aa34-5363e313ade2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:04:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4146840165' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4146840165' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 3.0 GiB data, 3.1 GiB used, 57 GiB / 60 GiB avail; 133 KiB/s rd, 101 MiB/s wr, 240 op/s
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.834 238887 DEBUG nova.network.neutron [req-fcd352f8-711c-4852-9452-d22ff933e941 req-8fadb494-35e7-48ce-93f4-b5a831b140fc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Updated VIF entry in instance network info cache for port 61877ae2-b263-468a-aa34-5363e313ade2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.835 238887 DEBUG nova.network.neutron [req-fcd352f8-711c-4852-9452-d22ff933e941 req-8fadb494-35e7-48ce-93f4-b5a831b140fc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Updating instance_info_cache with network_info: [{"id": "61877ae2-b263-468a-aa34-5363e313ade2", "address": "fa:16:3e:8c:af:11", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61877ae2-b2", "ovs_interfaceid": "61877ae2-b263-468a-aa34-5363e313ade2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.859 238887 DEBUG oslo_concurrency.lockutils [req-fcd352f8-711c-4852-9452-d22ff933e941 req-8fadb494-35e7-48ce-93f4-b5a831b140fc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-2d1ce093-ec3c-4af8-96cd-d55b8a832810" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.891 238887 DEBUG os_brick.utils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.893 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.905 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.905 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[1b30321e-121d-48b7-884d-d53efea24674]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.907 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.914 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.914 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b4aed6-22b3-4585-b08a-7157d68648d2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.916 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.922 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.923 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[98570ef1-340b-4a95-a42d-8aff78b8a86b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.924 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[03334002-4a5c-4b29-8770-54f39a79134c]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.925 238887 DEBUG oslo_concurrency.processutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.947 238887 DEBUG oslo_concurrency.processutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.949 238887 DEBUG os_brick.initiator.connectors.lightos [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.949 238887 DEBUG os_brick.initiator.connectors.lightos [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.949 238887 DEBUG os_brick.initiator.connectors.lightos [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.950 238887 DEBUG os_brick.utils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:04:00 np0005604943 nova_compute[238883]: 2026-02-02 12:04:00.950 238887 DEBUG nova.virt.block_device [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Updating existing volume attachment record: 51891967-1df3-4a73-a421-4e21581f844e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:04:01 np0005604943 nova_compute[238883]: 2026-02-02 12:04:01.442 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:04:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2701825213' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.033 238887 DEBUG nova.compute.manager [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.035 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.035 238887 INFO nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Creating image(s)#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.036 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.036 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Ensure instance console log exists: /var/lib/nova/instances/2d1ce093-ec3c-4af8-96cd-d55b8a832810/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.036 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.037 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.037 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.039 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Start _get_guest_xml network_info=[{"id": "61877ae2-b263-468a-aa34-5363e313ade2", "address": "fa:16:3e:8c:af:11", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61877ae2-b2", "ovs_interfaceid": "61877ae2-b263-468a-aa34-5363e313ade2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': '51891967-1df3-4a73-a421-4e21581f844e', 'delete_on_termination': True, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0605fe93-3aa9-4484-a759-2a613d4bc37b', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0605fe93-3aa9-4484-a759-2a613d4bc37b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '2d1ce093-ec3c-4af8-96cd-d55b8a832810', 'attached_at': '', 'detached_at': '', 'volume_id': '0605fe93-3aa9-4484-a759-2a613d4bc37b', 'serial': '0605fe93-3aa9-4484-a759-2a613d4bc37b'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.044 238887 WARNING nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.050 238887 DEBUG nova.virt.libvirt.host [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.051 238887 DEBUG nova.virt.libvirt.host [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.054 238887 DEBUG nova.virt.libvirt.host [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.055 238887 DEBUG nova.virt.libvirt.host [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.055 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.056 238887 DEBUG nova.virt.hardware [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.056 238887 DEBUG nova.virt.hardware [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.056 238887 DEBUG nova.virt.hardware [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.056 238887 DEBUG nova.virt.hardware [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.057 238887 DEBUG nova.virt.hardware [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.057 238887 DEBUG nova.virt.hardware [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.057 238887 DEBUG nova.virt.hardware [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.057 238887 DEBUG nova.virt.hardware [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.058 238887 DEBUG nova.virt.hardware [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.058 238887 DEBUG nova.virt.hardware [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.058 238887 DEBUG nova.virt.hardware [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.081 238887 DEBUG nova.storage.rbd_utils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 2d1ce093-ec3c-4af8-96cd-d55b8a832810_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.085 238887 DEBUG oslo_concurrency.processutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:04:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 2.6 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 199 KiB/s rd, 105 MiB/s wr, 347 op/s
Feb  2 07:04:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:04:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359389497' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.620 238887 DEBUG oslo_concurrency.processutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.643 238887 DEBUG nova.virt.libvirt.vif [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:03:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-793011599',display_name='tempest-TestVolumeBootPattern-server-793011599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-793011599',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-j39ppa2m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=
None,updated_at=2026-02-02T12:03:56Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=2d1ce093-ec3c-4af8-96cd-d55b8a832810,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61877ae2-b263-468a-aa34-5363e313ade2", "address": "fa:16:3e:8c:af:11", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61877ae2-b2", "ovs_interfaceid": "61877ae2-b263-468a-aa34-5363e313ade2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.643 238887 DEBUG nova.network.os_vif_util [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "61877ae2-b263-468a-aa34-5363e313ade2", "address": "fa:16:3e:8c:af:11", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61877ae2-b2", "ovs_interfaceid": "61877ae2-b263-468a-aa34-5363e313ade2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.644 238887 DEBUG nova.network.os_vif_util [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:af:11,bridge_name='br-int',has_traffic_filtering=True,id=61877ae2-b263-468a-aa34-5363e313ade2,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61877ae2-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.646 238887 DEBUG nova.objects.instance [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d1ce093-ec3c-4af8-96cd-d55b8a832810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.658 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  <uuid>2d1ce093-ec3c-4af8-96cd-d55b8a832810</uuid>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  <name>instance-00000010</name>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestVolumeBootPattern-server-793011599</nova:name>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:04:02</nova:creationTime>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:        <nova:user uuid="5e3fc9d8415541ecaa0da4968c9fa242">tempest-TestVolumeBootPattern-1059348902-project-member</nova:user>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:        <nova:project uuid="e66ed51ccbb840f083b8a86476696747">tempest-TestVolumeBootPattern-1059348902</nova:project>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:        <nova:port uuid="61877ae2-b263-468a-aa34-5363e313ade2">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <entry name="serial">2d1ce093-ec3c-4af8-96cd-d55b8a832810</entry>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <entry name="uuid">2d1ce093-ec3c-4af8-96cd-d55b8a832810</entry>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/2d1ce093-ec3c-4af8-96cd-d55b8a832810_disk.config">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-0605fe93-3aa9-4484-a759-2a613d4bc37b">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <serial>0605fe93-3aa9-4484-a759-2a613d4bc37b</serial>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:8c:af:11"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <target dev="tap61877ae2-b2"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/2d1ce093-ec3c-4af8-96cd-d55b8a832810/console.log" append="off"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:04:02 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:04:02 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:04:02 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:04:02 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.659 238887 DEBUG nova.compute.manager [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Preparing to wait for external event network-vif-plugged-61877ae2-b263-468a-aa34-5363e313ade2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.659 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.660 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.660 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.661 238887 DEBUG nova.virt.libvirt.vif [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:03:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-793011599',display_name='tempest-TestVolumeBootPattern-server-793011599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-793011599',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-j39ppa2m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trus
ted_certs=None,updated_at=2026-02-02T12:03:56Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=2d1ce093-ec3c-4af8-96cd-d55b8a832810,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61877ae2-b263-468a-aa34-5363e313ade2", "address": "fa:16:3e:8c:af:11", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61877ae2-b2", "ovs_interfaceid": "61877ae2-b263-468a-aa34-5363e313ade2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.661 238887 DEBUG nova.network.os_vif_util [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "61877ae2-b263-468a-aa34-5363e313ade2", "address": "fa:16:3e:8c:af:11", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61877ae2-b2", "ovs_interfaceid": "61877ae2-b263-468a-aa34-5363e313ade2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.662 238887 DEBUG nova.network.os_vif_util [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:af:11,bridge_name='br-int',has_traffic_filtering=True,id=61877ae2-b263-468a-aa34-5363e313ade2,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61877ae2-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.663 238887 DEBUG os_vif [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:af:11,bridge_name='br-int',has_traffic_filtering=True,id=61877ae2-b263-468a-aa34-5363e313ade2,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61877ae2-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.664 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.664 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.665 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.667 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.668 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61877ae2-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.668 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap61877ae2-b2, col_values=(('external_ids', {'iface-id': '61877ae2-b263-468a-aa34-5363e313ade2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:af:11', 'vm-uuid': '2d1ce093-ec3c-4af8-96cd-d55b8a832810'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.670 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:02 np0005604943 NetworkManager[49093]: <info>  [1770033842.6715] manager: (tap61877ae2-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Feb  2 07:04:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.673 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.675 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.677 238887 INFO os_vif [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:af:11,bridge_name='br-int',has_traffic_filtering=True,id=61877ae2-b263-468a-aa34-5363e313ade2,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61877ae2-b2')#033[00m
Feb  2 07:04:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3463993754' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3463993754' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.731 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.732 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.732 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No VIF found with MAC fa:16:3e:8c:af:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.733 238887 INFO nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Using config drive#033[00m
Feb  2 07:04:02 np0005604943 nova_compute[238883]: 2026-02-02 12:04:02.753 238887 DEBUG nova.storage.rbd_utils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 2d1ce093-ec3c-4af8-96cd-d55b8a832810_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.008 238887 INFO nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Creating config drive at /var/lib/nova/instances/2d1ce093-ec3c-4af8-96cd-d55b8a832810/disk.config#033[00m
Feb  2 07:04:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.014 238887 DEBUG oslo_concurrency.processutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d1ce093-ec3c-4af8-96cd-d55b8a832810/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp31_byyv0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:04:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Feb  2 07:04:03 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.142 238887 DEBUG oslo_concurrency.processutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d1ce093-ec3c-4af8-96cd-d55b8a832810/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp31_byyv0" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.178 238887 DEBUG nova.storage.rbd_utils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 2d1ce093-ec3c-4af8-96cd-d55b8a832810_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.181 238887 DEBUG oslo_concurrency.processutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d1ce093-ec3c-4af8-96cd-d55b8a832810/disk.config 2d1ce093-ec3c-4af8-96cd-d55b8a832810_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.305 238887 DEBUG oslo_concurrency.processutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d1ce093-ec3c-4af8-96cd-d55b8a832810/disk.config 2d1ce093-ec3c-4af8-96cd-d55b8a832810_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.306 238887 INFO nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Deleting local config drive /var/lib/nova/instances/2d1ce093-ec3c-4af8-96cd-d55b8a832810/disk.config because it was imported into RBD.#033[00m
Feb  2 07:04:03 np0005604943 kernel: tap61877ae2-b2: entered promiscuous mode
Feb  2 07:04:03 np0005604943 NetworkManager[49093]: <info>  [1770033843.3504] manager: (tap61877ae2-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Feb  2 07:04:03 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:03Z|00163|binding|INFO|Claiming lport 61877ae2-b263-468a-aa34-5363e313ade2 for this chassis.
Feb  2 07:04:03 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:03Z|00164|binding|INFO|61877ae2-b263-468a-aa34-5363e313ade2: Claiming fa:16:3e:8c:af:11 10.100.0.6
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.351 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:03 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:03Z|00165|binding|INFO|Setting lport 61877ae2-b263-468a-aa34-5363e313ade2 ovn-installed in OVS
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.361 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:03 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:03Z|00166|binding|INFO|Setting lport 61877ae2-b263-468a-aa34-5363e313ade2 up in Southbound
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.368 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:af:11 10.100.0.6'], port_security=['fa:16:3e:8c:af:11 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2d1ce093-ec3c-4af8-96cd-d55b8a832810', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f4be2f31-185f-4ed3-aa62-fafdc1532722', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=61877ae2-b263-468a-aa34-5363e313ade2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.369 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 61877ae2-b263-468a-aa34-5363e313ade2 in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 bound to our chassis#033[00m
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.370 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34290362-cccd-452d-8e7e-22a6057fdb60#033[00m
Feb  2 07:04:03 np0005604943 systemd-udevd[260720]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.383 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6a475c2d-7fb4-4c65-a5a0-bc91c7f699e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.384 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34290362-c1 in ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:04:03 np0005604943 systemd-machined[206973]: New machine qemu-16-instance-00000010.
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.387 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34290362-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.387 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[97e78e7b-88d3-4f1c-80e0-90cf6599a3a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.388 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7146bf10-3ce0-413e-a418-0f4d2602f960]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 NetworkManager[49093]: <info>  [1770033843.3906] device (tap61877ae2-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:04:03 np0005604943 NetworkManager[49093]: <info>  [1770033843.3918] device (tap61877ae2-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.395 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[0faa42db-e16b-462b-973b-6edd87537851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.405 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[f60bb565-bbf5-4d90-8943-bf6cbb67cd57]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.426 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[94096ae6-5db9-4a26-bded-d9b16afe3162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.432 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbd4fd6-d7f7-4d2c-bccf-116502f8db1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 NetworkManager[49093]: <info>  [1770033843.4347] manager: (tap34290362-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/89)
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.466 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a1a694-b4e5-4e46-997b-c46b7afe2558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.470 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[1365570a-3c96-4715-a88a-b62b10b4a8ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 NetworkManager[49093]: <info>  [1770033843.4905] device (tap34290362-c0): carrier: link connected
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.497 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f49b87-8786-4c2f-a514-2a971d7e8671]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.508 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[90b15edd-1b91-49ce-b878-b06f80604bd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428883, 'reachable_time': 31189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260753, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.520 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3911b0-9b12-41d9-8558-19d6beb0f477]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb3:39d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428883, 'tstamp': 428883}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260754, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.535 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c2f979-1b26-4ad9-a013-5c1716fcbe13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428883, 'reachable_time': 31189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260755, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.556 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ac16644d-145a-43d2-b7e9-c3eeda403a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.591 238887 DEBUG nova.compute.manager [req-a6d0c0a6-4e21-41cf-b7f2-cacd435b1323 req-d10d3a0c-8d00-4c73-9361-cbd075f07a32 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Received event network-vif-plugged-61877ae2-b263-468a-aa34-5363e313ade2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.591 238887 DEBUG oslo_concurrency.lockutils [req-a6d0c0a6-4e21-41cf-b7f2-cacd435b1323 req-d10d3a0c-8d00-4c73-9361-cbd075f07a32 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.592 238887 DEBUG oslo_concurrency.lockutils [req-a6d0c0a6-4e21-41cf-b7f2-cacd435b1323 req-d10d3a0c-8d00-4c73-9361-cbd075f07a32 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.592 238887 DEBUG oslo_concurrency.lockutils [req-a6d0c0a6-4e21-41cf-b7f2-cacd435b1323 req-d10d3a0c-8d00-4c73-9361-cbd075f07a32 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.593 238887 DEBUG nova.compute.manager [req-a6d0c0a6-4e21-41cf-b7f2-cacd435b1323 req-d10d3a0c-8d00-4c73-9361-cbd075f07a32 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Processing event network-vif-plugged-61877ae2-b263-468a-aa34-5363e313ade2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.604 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1d742df6-64b8-48ad-a441-3aadd1d14b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.606 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.606 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.607 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34290362-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.609 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:03 np0005604943 NetworkManager[49093]: <info>  [1770033843.6101] manager: (tap34290362-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Feb  2 07:04:03 np0005604943 kernel: tap34290362-c0: entered promiscuous mode
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.616 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34290362-c0, col_values=(('external_ids', {'iface-id': '54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 07:04:03 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:03Z|00167|binding|INFO|Releasing lport 54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c from this chassis (sb_readonly=0)
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.618 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.622 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.624 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a36f9010-c27a-4431-812d-14701f23cbcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.625 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-34290362-cccd-452d-8e7e-22a6057fdb60
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID 34290362-cccd-452d-8e7e-22a6057fdb60
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb  2 07:04:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:03.625 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'env', 'PROCESS_TAG=haproxy-34290362-cccd-452d-8e7e-22a6057fdb60', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34290362-cccd-452d-8e7e-22a6057fdb60.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.628 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.783 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033843.78339, 2d1ce093-ec3c-4af8-96cd-d55b8a832810 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.784 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] VM Started (Lifecycle Event)
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.786 238887 DEBUG nova.compute.manager [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.790 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.793 238887 INFO nova.virt.libvirt.driver [-] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Instance spawned successfully.
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.793 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.805 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.812 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.817 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.818 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.819 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.820 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.820 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.821 238887 DEBUG nova.virt.libvirt.driver [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.833 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.834 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033843.7846854, 2d1ce093-ec3c-4af8-96cd-d55b8a832810 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.834 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] VM Paused (Lifecycle Event)
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.859 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.862 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033843.788638, 2d1ce093-ec3c-4af8-96cd-d55b8a832810 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.862 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] VM Resumed (Lifecycle Event)
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.883 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.888 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.891 238887 INFO nova.compute.manager [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Took 1.86 seconds to spawn the instance on the hypervisor.
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.892 238887 DEBUG nova.compute.manager [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.915 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.957 238887 INFO nova.compute.manager [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Took 8.53 seconds to build instance.
Feb  2 07:04:03 np0005604943 nova_compute[238883]: 2026-02-02 12:04:03.971 238887 DEBUG oslo_concurrency.lockutils [None req-78e37c1d-ded7-4c0f-a767-e63e79281bb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:04:03 np0005604943 podman[260827]: 2026-02-02 12:04:03.9729072 +0000 UTC m=+0.043932227 container create 19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb  2 07:04:04 np0005604943 systemd[1]: Started libpod-conmon-19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7.scope.
Feb  2 07:04:04 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:04:04 np0005604943 podman[260827]: 2026-02-02 12:04:03.946100377 +0000 UTC m=+0.017125394 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:04:04 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8167246b779eb27102aeb722d81b81a544c1d15a52bbf02c02aba3464a57875/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2808913604' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2808913604' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:04 np0005604943 podman[260827]: 2026-02-02 12:04:04.055782549 +0000 UTC m=+0.126807576 container init 19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 07:04:04 np0005604943 podman[260827]: 2026-02-02 12:04:04.060300782 +0000 UTC m=+0.131325789 container start 19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:04:04 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260842]: [NOTICE]   (260846) : New worker (260848) forked
Feb  2 07:04:04 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260842]: [NOTICE]   (260846) : Loading success.
Feb  2 07:04:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 253 KiB/s rd, 128 MiB/s wr, 445 op/s
Feb  2 07:04:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1320920250' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1320920250' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:05 np0005604943 nova_compute[238883]: 2026-02-02 12:04:05.674 238887 DEBUG nova.compute.manager [req-bf588326-3670-4b45-8056-f0b6e03b3212 req-a3abaddd-7f86-428c-88a3-20f9928b07e9 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Received event network-vif-plugged-61877ae2-b263-468a-aa34-5363e313ade2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 07:04:05 np0005604943 nova_compute[238883]: 2026-02-02 12:04:05.674 238887 DEBUG oslo_concurrency.lockutils [req-bf588326-3670-4b45-8056-f0b6e03b3212 req-a3abaddd-7f86-428c-88a3-20f9928b07e9 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:04:05 np0005604943 nova_compute[238883]: 2026-02-02 12:04:05.674 238887 DEBUG oslo_concurrency.lockutils [req-bf588326-3670-4b45-8056-f0b6e03b3212 req-a3abaddd-7f86-428c-88a3-20f9928b07e9 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:04:05 np0005604943 nova_compute[238883]: 2026-02-02 12:04:05.675 238887 DEBUG oslo_concurrency.lockutils [req-bf588326-3670-4b45-8056-f0b6e03b3212 req-a3abaddd-7f86-428c-88a3-20f9928b07e9 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:04:05 np0005604943 nova_compute[238883]: 2026-02-02 12:04:05.675 238887 DEBUG nova.compute.manager [req-bf588326-3670-4b45-8056-f0b6e03b3212 req-a3abaddd-7f86-428c-88a3-20f9928b07e9 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] No waiting events found dispatching network-vif-plugged-61877ae2-b263-468a-aa34-5363e313ade2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 07:04:05 np0005604943 nova_compute[238883]: 2026-02-02 12:04:05.675 238887 WARNING nova.compute.manager [req-bf588326-3670-4b45-8056-f0b6e03b3212 req-a3abaddd-7f86-428c-88a3-20f9928b07e9 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Received unexpected event network-vif-plugged-61877ae2-b263-468a-aa34-5363e313ade2 for instance with vm_state active and task_state None.
Feb  2 07:04:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Feb  2 07:04:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Feb  2 07:04:06 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Feb  2 07:04:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 132 KiB/s rd, 74 MiB/s wr, 240 op/s
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.435 238887 DEBUG oslo_concurrency.lockutils [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.436 238887 DEBUG oslo_concurrency.lockutils [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.436 238887 DEBUG oslo_concurrency.lockutils [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.436 238887 DEBUG oslo_concurrency.lockutils [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.436 238887 DEBUG oslo_concurrency.lockutils [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.437 238887 INFO nova.compute.manager [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Terminating instance
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.439 238887 DEBUG nova.compute.manager [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.445 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:06 np0005604943 kernel: tap61877ae2-b2 (unregistering): left promiscuous mode
Feb  2 07:04:06 np0005604943 NetworkManager[49093]: <info>  [1770033846.4741] device (tap61877ae2-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:04:06 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:06Z|00168|binding|INFO|Releasing lport 61877ae2-b263-468a-aa34-5363e313ade2 from this chassis (sb_readonly=0)
Feb  2 07:04:06 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:06Z|00169|binding|INFO|Setting lport 61877ae2-b263-468a-aa34-5363e313ade2 down in Southbound
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.484 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:06 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:06Z|00170|binding|INFO|Removing iface tap61877ae2-b2 ovn-installed in OVS
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.486 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.494 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:af:11 10.100.0.6'], port_security=['fa:16:3e:8c:af:11 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2d1ce093-ec3c-4af8-96cd-d55b8a832810', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f4be2f31-185f-4ed3-aa62-fafdc1532722', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=61877ae2-b263-468a-aa34-5363e313ade2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.496 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 61877ae2-b263-468a-aa34-5363e313ade2 in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 unbound from our chassis
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.497 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34290362-cccd-452d-8e7e-22a6057fdb60, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.498 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.498 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee52867-8c36-4820-b289-80081749d7f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.500 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 namespace which is not needed anymore
Feb  2 07:04:06 np0005604943 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Feb  2 07:04:06 np0005604943 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 3.150s CPU time.
Feb  2 07:04:06 np0005604943 systemd-machined[206973]: Machine qemu-16-instance-00000010 terminated.
Feb  2 07:04:06 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260842]: [NOTICE]   (260846) : haproxy version is 2.8.14-c23fe91
Feb  2 07:04:06 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260842]: [NOTICE]   (260846) : path to executable is /usr/sbin/haproxy
Feb  2 07:04:06 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260842]: [WARNING]  (260846) : Exiting Master process...
Feb  2 07:04:06 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260842]: [ALERT]    (260846) : Current worker (260848) exited with code 143 (Terminated)
Feb  2 07:04:06 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[260842]: [WARNING]  (260846) : All workers exited. Exiting... (0)
Feb  2 07:04:06 np0005604943 systemd[1]: libpod-19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7.scope: Deactivated successfully.
Feb  2 07:04:06 np0005604943 conmon[260842]: conmon 19b3a4412c10516f6d63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7.scope/container/memory.events
Feb  2 07:04:06 np0005604943 podman[260882]: 2026-02-02 12:04:06.626444085 +0000 UTC m=+0.039735953 container died 19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 07:04:06 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7-userdata-shm.mount: Deactivated successfully.
Feb  2 07:04:06 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f8167246b779eb27102aeb722d81b81a544c1d15a52bbf02c02aba3464a57875-merged.mount: Deactivated successfully.
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.657 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.662 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.669 238887 INFO nova.virt.libvirt.driver [-] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Instance destroyed successfully.#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.670 238887 DEBUG nova.objects.instance [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'resources' on Instance uuid 2d1ce093-ec3c-4af8-96cd-d55b8a832810 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:04:06 np0005604943 podman[260882]: 2026-02-02 12:04:06.672121185 +0000 UTC m=+0.085413053 container cleanup 19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true)
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.685 238887 DEBUG nova.virt.libvirt.vif [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:03:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-793011599',display_name='tempest-TestVolumeBootPattern-server-793011599',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-793011599',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:04:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-j39ppa2m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner
_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:04:03Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=2d1ce093-ec3c-4af8-96cd-d55b8a832810,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "61877ae2-b263-468a-aa34-5363e313ade2", "address": "fa:16:3e:8c:af:11", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61877ae2-b2", "ovs_interfaceid": "61877ae2-b263-468a-aa34-5363e313ade2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.686 238887 DEBUG nova.network.os_vif_util [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "61877ae2-b263-468a-aa34-5363e313ade2", "address": "fa:16:3e:8c:af:11", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61877ae2-b2", "ovs_interfaceid": "61877ae2-b263-468a-aa34-5363e313ade2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.687 238887 DEBUG nova.network.os_vif_util [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:af:11,bridge_name='br-int',has_traffic_filtering=True,id=61877ae2-b263-468a-aa34-5363e313ade2,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61877ae2-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.687 238887 DEBUG os_vif [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:af:11,bridge_name='br-int',has_traffic_filtering=True,id=61877ae2-b263-468a-aa34-5363e313ade2,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61877ae2-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.689 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.689 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61877ae2-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:04:06 np0005604943 systemd[1]: libpod-conmon-19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7.scope: Deactivated successfully.
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.693 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.695 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.698 238887 INFO os_vif [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:af:11,bridge_name='br-int',has_traffic_filtering=True,id=61877ae2-b263-468a-aa34-5363e313ade2,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61877ae2-b2')#033[00m
Feb  2 07:04:06 np0005604943 podman[260920]: 2026-02-02 12:04:06.744109135 +0000 UTC m=+0.047312240 container remove 19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.750 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7f4117f4-7d67-42b9-8537-b8006b8a811d]: (4, ('Mon Feb  2 12:04:06 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 (19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7)\n19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7\nMon Feb  2 12:04:06 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 (19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7)\n19b3a4412c10516f6d6391e9987a3e16bfe81176ec9340e4d79e0c1870f505c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.753 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[450c25b6-0f7e-495e-90e3-68ff915c09bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.754 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.757 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:06 np0005604943 kernel: tap34290362-c0: left promiscuous mode
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.765 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.769 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc2d3c3-9d96-4608-b230-540a18df0cdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.791 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[5147a807-6401-418d-8b54-e75b467e48e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.793 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c24efa7d-9f4d-4ecf-b8d5-696f72f82d9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.810 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a5359762-fdd3-4808-82ed-d8d163b7a1ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428877, 'reachable_time': 31364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260953, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:06 np0005604943 systemd[1]: run-netns-ovnmeta\x2d34290362\x2dcccd\x2d452d\x2d8e7e\x2d22a6057fdb60.mount: Deactivated successfully.
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.815 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:04:06 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:06.815 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[0859b6bf-cc7a-4d8d-8617-12b4c0ebb67e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.871 238887 INFO nova.virt.libvirt.driver [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Deleting instance files /var/lib/nova/instances/2d1ce093-ec3c-4af8-96cd-d55b8a832810_del#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.872 238887 INFO nova.virt.libvirt.driver [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Deletion of /var/lib/nova/instances/2d1ce093-ec3c-4af8-96cd-d55b8a832810_del complete#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.923 238887 INFO nova.compute.manager [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Took 0.48 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.924 238887 DEBUG oslo.service.loopingcall [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:04:06 np0005604943 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.925 238887 DEBUG nova.compute.manager [-] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:04:06 np0005604943 nova_compute[238883]: 2026-02-02 12:04:06.925 238887 DEBUG nova.network.neutron [-] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:04:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Feb  2 07:04:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Feb  2 07:04:07 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.433 238887 DEBUG nova.network.neutron [-] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.451 238887 INFO nova.compute.manager [-] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Took 0.53 seconds to deallocate network for instance.#033[00m
Feb  2 07:04:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1140011219' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1140011219' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.633 238887 INFO nova.compute.manager [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Took 0.18 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.635 238887 DEBUG nova.compute.manager [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Deleting volume: 0605fe93-3aa9-4484-a759-2a613d4bc37b _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Feb  2 07:04:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.741 238887 DEBUG nova.compute.manager [req-b1decfc5-a7f9-42ce-8bca-37b2d36c830c req-7900f688-c774-475b-bace-aa6731c31b13 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Received event network-vif-unplugged-61877ae2-b263-468a-aa34-5363e313ade2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.742 238887 DEBUG oslo_concurrency.lockutils [req-b1decfc5-a7f9-42ce-8bca-37b2d36c830c req-7900f688-c774-475b-bace-aa6731c31b13 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.742 238887 DEBUG oslo_concurrency.lockutils [req-b1decfc5-a7f9-42ce-8bca-37b2d36c830c req-7900f688-c774-475b-bace-aa6731c31b13 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.742 238887 DEBUG oslo_concurrency.lockutils [req-b1decfc5-a7f9-42ce-8bca-37b2d36c830c req-7900f688-c774-475b-bace-aa6731c31b13 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.743 238887 DEBUG nova.compute.manager [req-b1decfc5-a7f9-42ce-8bca-37b2d36c830c req-7900f688-c774-475b-bace-aa6731c31b13 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] No waiting events found dispatching network-vif-unplugged-61877ae2-b263-468a-aa34-5363e313ade2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.743 238887 DEBUG nova.compute.manager [req-b1decfc5-a7f9-42ce-8bca-37b2d36c830c req-7900f688-c774-475b-bace-aa6731c31b13 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Received event network-vif-unplugged-61877ae2-b263-468a-aa34-5363e313ade2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.743 238887 DEBUG nova.compute.manager [req-b1decfc5-a7f9-42ce-8bca-37b2d36c830c req-7900f688-c774-475b-bace-aa6731c31b13 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Received event network-vif-plugged-61877ae2-b263-468a-aa34-5363e313ade2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.743 238887 DEBUG oslo_concurrency.lockutils [req-b1decfc5-a7f9-42ce-8bca-37b2d36c830c req-7900f688-c774-475b-bace-aa6731c31b13 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.743 238887 DEBUG oslo_concurrency.lockutils [req-b1decfc5-a7f9-42ce-8bca-37b2d36c830c req-7900f688-c774-475b-bace-aa6731c31b13 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.744 238887 DEBUG oslo_concurrency.lockutils [req-b1decfc5-a7f9-42ce-8bca-37b2d36c830c req-7900f688-c774-475b-bace-aa6731c31b13 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.744 238887 DEBUG nova.compute.manager [req-b1decfc5-a7f9-42ce-8bca-37b2d36c830c req-7900f688-c774-475b-bace-aa6731c31b13 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] No waiting events found dispatching network-vif-plugged-61877ae2-b263-468a-aa34-5363e313ade2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.744 238887 WARNING nova.compute.manager [req-b1decfc5-a7f9-42ce-8bca-37b2d36c830c req-7900f688-c774-475b-bace-aa6731c31b13 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Received unexpected event network-vif-plugged-61877ae2-b263-468a-aa34-5363e313ade2 for instance with vm_state active and task_state deleting.
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.744 238887 DEBUG nova.compute.manager [req-b1decfc5-a7f9-42ce-8bca-37b2d36c830c req-7900f688-c774-475b-bace-aa6731c31b13 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Received event network-vif-deleted-61877ae2-b263-468a-aa34-5363e313ade2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.790 238887 DEBUG oslo_concurrency.lockutils [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.791 238887 DEBUG oslo_concurrency.lockutils [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:04:07 np0005604943 nova_compute[238883]: 2026-02-02 12:04:07.843 238887 DEBUG oslo_concurrency.processutils [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:04:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3692239077' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3692239077' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2206630001' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2206630001' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:04:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2442096620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:04:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 8 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 291 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 266 op/s
Feb  2 07:04:08 np0005604943 nova_compute[238883]: 2026-02-02 12:04:08.373 238887 DEBUG oslo_concurrency.processutils [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:04:08 np0005604943 nova_compute[238883]: 2026-02-02 12:04:08.380 238887 DEBUG nova.compute.provider_tree [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 07:04:08 np0005604943 nova_compute[238883]: 2026-02-02 12:04:08.407 238887 DEBUG nova.scheduler.client.report [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 07:04:08 np0005604943 nova_compute[238883]: 2026-02-02 12:04:08.434 238887 DEBUG oslo_concurrency.lockutils [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:04:08 np0005604943 nova_compute[238883]: 2026-02-02 12:04:08.463 238887 INFO nova.scheduler.client.report [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Deleted allocations for instance 2d1ce093-ec3c-4af8-96cd-d55b8a832810
Feb  2 07:04:08 np0005604943 nova_compute[238883]: 2026-02-02 12:04:08.520 238887 DEBUG oslo_concurrency.lockutils [None req-93692f9f-04af-4833-b57c-e964e8226c4a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "2d1ce093-ec3c-4af8-96cd-d55b8a832810" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:04:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Feb  2 07:04:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Feb  2 07:04:09 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Feb  2 07:04:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:04:09
Feb  2 07:04:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:04:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:04:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'volumes', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.log']
Feb  2 07:04:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:04:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:09.778 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 07:04:09 np0005604943 nova_compute[238883]: 2026-02-02 12:04:09.779 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:09 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:09.780 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb  2 07:04:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:10.029 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:04:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:10.030 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:04:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:10.030 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:04:10 np0005604943 podman[260979]: 2026-02-02 12:04:10.040010577 +0000 UTC m=+0.051917574 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:04:10 np0005604943 podman[260978]: 2026-02-02 12:04:10.092890335 +0000 UTC m=+0.111701673 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb  2 07:04:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Feb  2 07:04:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Feb  2 07:04:10 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 8 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 293 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 5.6 MiB/s rd, 47 KiB/s wr, 492 op/s
Feb  2 07:04:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/219394069' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/219394069' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:04:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:04:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Feb  2 07:04:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Feb  2 07:04:11 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Feb  2 07:04:11 np0005604943 nova_compute[238883]: 2026-02-02 12:04:11.447 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3829059444' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3829059444' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1459619422' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1459619422' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:11 np0005604943 nova_compute[238883]: 2026-02-02 12:04:11.724 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 877 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 41 KiB/s wr, 543 op/s
Feb  2 07:04:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:04:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Feb  2 07:04:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Feb  2 07:04:12 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Feb  2 07:04:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 88 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 7.8 KiB/s wr, 268 op/s
Feb  2 07:04:15 np0005604943 nova_compute[238883]: 2026-02-02 12:04:15.336 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:15 np0005604943 nova_compute[238883]: 2026-02-02 12:04:15.423 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 88 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 6.6 KiB/s wr, 225 op/s
Feb  2 07:04:16 np0005604943 nova_compute[238883]: 2026-02-02 12:04:16.491 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:16 np0005604943 nova_compute[238883]: 2026-02-02 12:04:16.727 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:04:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Feb  2 07:04:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Feb  2 07:04:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Feb  2 07:04:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:17.782 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 07:04:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.2 KiB/s wr, 206 op/s
Feb  2 07:04:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.0 KiB/s wr, 114 op/s
Feb  2 07:04:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Feb  2 07:04:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Feb  2 07:04:20 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Feb  2 07:04:21 np0005604943 nova_compute[238883]: 2026-02-02 12:04:21.494 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.9402152169082224e-06 of space, bias 1.0, pg target 0.0005820645650724668 quantized to 32 (current 32)
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003588698901895388 of space, bias 1.0, pg target 0.10766096705686164 quantized to 32 (current 32)
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.3397996011574907e-06 of space, bias 1.0, pg target 0.0007019398803472472 quantized to 32 (current 32)
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663732217564017 of space, bias 1.0, pg target 0.1999119665269205 quantized to 32 (current 32)
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0763068850279367e-06 of space, bias 4.0, pg target 0.001291568262033524 quantized to 16 (current 16)
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:04:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 07:04:21 np0005604943 nova_compute[238883]: 2026-02-02 12:04:21.669 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033846.6678703, 2d1ce093-ec3c-4af8-96cd-d55b8a832810 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:04:21 np0005604943 nova_compute[238883]: 2026-02-02 12:04:21.670 238887 INFO nova.compute.manager [-] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] VM Stopped (Lifecycle Event)
Feb  2 07:04:21 np0005604943 nova_compute[238883]: 2026-02-02 12:04:21.700 238887 DEBUG nova.compute.manager [None req-2c5576ca-e8a0-41d6-ad1e-3bb13dbea4e7 - - - - - -] [instance: 2d1ce093-ec3c-4af8-96cd-d55b8a832810] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:04:21 np0005604943 nova_compute[238883]: 2026-02-02 12:04:21.728 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 114 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 47 op/s
Feb  2 07:04:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:04:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Feb  2 07:04:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Feb  2 07:04:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Feb  2 07:04:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.2 MiB/s wr, 106 op/s
Feb  2 07:04:24 np0005604943 nova_compute[238883]: 2026-02-02 12:04:24.926 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:04:24 np0005604943 nova_compute[238883]: 2026-02-02 12:04:24.926 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:04:24 np0005604943 nova_compute[238883]: 2026-02-02 12:04:24.944 238887 DEBUG nova.compute.manager [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3366884937' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3366884937' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.011 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.011 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.018 238887 DEBUG nova.virt.hardware [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.018 238887 INFO nova.compute.claims [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Claim successful on node compute-0.ctlplane.example.com
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.099 238887 DEBUG oslo_concurrency.processutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3520000926' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.665 238887 DEBUG oslo_concurrency.processutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.671 238887 DEBUG nova.compute.provider_tree [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.696 238887 DEBUG nova.scheduler.client.report [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:04:25 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.803 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.804 238887 DEBUG nova.compute.manager [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.846 238887 DEBUG nova.compute.manager [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.847 238887 DEBUG nova.network.neutron [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.867 238887 INFO nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.892 238887 DEBUG nova.compute.manager [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb  2 07:04:25 np0005604943 nova_compute[238883]: 2026-02-02 12:04:25.936 238887 INFO nova.virt.block_device [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Booting with volume 1645c0c1-d976-4f9f-ad42-eca5c2c0ddb0 at /dev/vda
Feb  2 07:04:25 np0005604943 podman[261190]: 2026-02-02 12:04:25.967860333 +0000 UTC m=+0.050665483 container create 4427d181eec41848c860946220af2348877ecb81c27f0f14424b56bc0d8a0e8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 07:04:26 np0005604943 systemd[1]: Started libpod-conmon-4427d181eec41848c860946220af2348877ecb81c27f0f14424b56bc0d8a0e8b.scope.
Feb  2 07:04:26 np0005604943 podman[261190]: 2026-02-02 12:04:25.948160336 +0000 UTC m=+0.030965506 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:04:26 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:04:26 np0005604943 podman[261190]: 2026-02-02 12:04:26.061653533 +0000 UTC m=+0.144458683 container init 4427d181eec41848c860946220af2348877ecb81c27f0f14424b56bc0d8a0e8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 07:04:26 np0005604943 podman[261190]: 2026-02-02 12:04:26.069274572 +0000 UTC m=+0.152079722 container start 4427d181eec41848c860946220af2348877ecb81c27f0f14424b56bc0d8a0e8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feynman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 07:04:26 np0005604943 podman[261190]: 2026-02-02 12:04:26.073605569 +0000 UTC m=+0.156410719 container attach 4427d181eec41848c860946220af2348877ecb81c27f0f14424b56bc0d8a0e8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feynman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 07:04:26 np0005604943 upbeat_feynman[261206]: 167 167
Feb  2 07:04:26 np0005604943 systemd[1]: libpod-4427d181eec41848c860946220af2348877ecb81c27f0f14424b56bc0d8a0e8b.scope: Deactivated successfully.
Feb  2 07:04:26 np0005604943 conmon[261206]: conmon 4427d181eec41848c860 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4427d181eec41848c860946220af2348877ecb81c27f0f14424b56bc0d8a0e8b.scope/container/memory.events
Feb  2 07:04:26 np0005604943 podman[261190]: 2026-02-02 12:04:26.077148036 +0000 UTC m=+0.159953186 container died 4427d181eec41848c860946220af2348877ecb81c27f0f14424b56bc0d8a0e8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feynman, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.077 238887 DEBUG os_brick.utils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.080 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.091 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.091 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[1847840c-ef32-4ddf-856f-ae5bb6542fd2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.094 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:04:26 np0005604943 systemd[1]: var-lib-containers-storage-overlay-deb54ce559c372d8bd967a0be7a288f5542160a9e24863b6ce9d81938cc829a6-merged.mount: Deactivated successfully.
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.105 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.105 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[04cbbd8e-a623-4ca9-b6e6-39bbb40541d2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.107 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:04:26 np0005604943 podman[261190]: 2026-02-02 12:04:26.115374062 +0000 UTC m=+0.198179212 container remove 4427d181eec41848c860946220af2348877ecb81c27f0f14424b56bc0d8a0e8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_feynman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.117 238887 DEBUG nova.policy [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e3fc9d8415541ecaa0da4968c9fa242', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e66ed51ccbb840f083b8a86476696747', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.115 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.115 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[bdecc42b-6584-4cb5-951e-7dc762aeeb6e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.124 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[b90418ce-3ff8-4c54-8713-83eae1a0b9b8]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.125 238887 DEBUG oslo_concurrency.processutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:04:26 np0005604943 systemd[1]: libpod-conmon-4427d181eec41848c860946220af2348877ecb81c27f0f14424b56bc0d8a0e8b.scope: Deactivated successfully.
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.148 238887 DEBUG oslo_concurrency.processutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.150 238887 DEBUG os_brick.initiator.connectors.lightos [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.150 238887 DEBUG os_brick.initiator.connectors.lightos [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.151 238887 DEBUG os_brick.initiator.connectors.lightos [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.151 238887 DEBUG os_brick.utils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.151 238887 DEBUG nova.virt.block_device [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Updating existing volume attachment record: 2b083d4e-d019-45b9-a383-d587282afffe _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb  2 07:04:26 np0005604943 podman[261238]: 2026-02-02 12:04:26.234780465 +0000 UTC m=+0.037192601 container create 035f1e5141b74cd0143871f3dcf2920aae7d395c909ebe63583d7e43ce83ff1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 07:04:26 np0005604943 systemd[1]: Started libpod-conmon-035f1e5141b74cd0143871f3dcf2920aae7d395c909ebe63583d7e43ce83ff1a.scope.
Feb  2 07:04:26 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:04:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b30fea03cc63ff0984039080506a216b547ee4e01f3c20dbfa8cfa8671f077/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b30fea03cc63ff0984039080506a216b547ee4e01f3c20dbfa8cfa8671f077/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b30fea03cc63ff0984039080506a216b547ee4e01f3c20dbfa8cfa8671f077/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b30fea03cc63ff0984039080506a216b547ee4e01f3c20dbfa8cfa8671f077/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:26 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b30fea03cc63ff0984039080506a216b547ee4e01f3c20dbfa8cfa8671f077/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:26 np0005604943 podman[261238]: 2026-02-02 12:04:26.309542464 +0000 UTC m=+0.111954590 container init 035f1e5141b74cd0143871f3dcf2920aae7d395c909ebe63583d7e43ce83ff1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:04:26 np0005604943 podman[261238]: 2026-02-02 12:04:26.217777135 +0000 UTC m=+0.020189251 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:04:26 np0005604943 podman[261238]: 2026-02-02 12:04:26.319728286 +0000 UTC m=+0.122140392 container start 035f1e5141b74cd0143871f3dcf2920aae7d395c909ebe63583d7e43ce83ff1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:04:26 np0005604943 podman[261238]: 2026-02-02 12:04:26.323407327 +0000 UTC m=+0.125819463 container attach 035f1e5141b74cd0143871f3dcf2920aae7d395c909ebe63583d7e43ce83ff1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:04:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.7 MiB/s wr, 89 op/s
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.497 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Feb  2 07:04:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Feb  2 07:04:26 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Feb  2 07:04:26 np0005604943 nova_compute[238883]: 2026-02-02 12:04:26.730 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:04:26 np0005604943 elastic_meninsky[261255]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:04:26 np0005604943 elastic_meninsky[261255]: --> All data devices are unavailable
Feb  2 07:04:26 np0005604943 systemd[1]: libpod-035f1e5141b74cd0143871f3dcf2920aae7d395c909ebe63583d7e43ce83ff1a.scope: Deactivated successfully.
Feb  2 07:04:26 np0005604943 podman[261238]: 2026-02-02 12:04:26.840521296 +0000 UTC m=+0.642933402 container died 035f1e5141b74cd0143871f3dcf2920aae7d395c909ebe63583d7e43ce83ff1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:04:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:04:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3687440157' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:04:26 np0005604943 systemd[1]: var-lib-containers-storage-overlay-56b30fea03cc63ff0984039080506a216b547ee4e01f3c20dbfa8cfa8671f077-merged.mount: Deactivated successfully.
Feb  2 07:04:26 np0005604943 podman[261238]: 2026-02-02 12:04:26.883627372 +0000 UTC m=+0.686039498 container remove 035f1e5141b74cd0143871f3dcf2920aae7d395c909ebe63583d7e43ce83ff1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb  2 07:04:26 np0005604943 systemd[1]: libpod-conmon-035f1e5141b74cd0143871f3dcf2920aae7d395c909ebe63583d7e43ce83ff1a.scope: Deactivated successfully.
Feb  2 07:04:27 np0005604943 nova_compute[238883]: 2026-02-02 12:04:27.254 238887 DEBUG nova.compute.manager [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb  2 07:04:27 np0005604943 nova_compute[238883]: 2026-02-02 12:04:27.258 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb  2 07:04:27 np0005604943 nova_compute[238883]: 2026-02-02 12:04:27.258 238887 INFO nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Creating image(s)
Feb  2 07:04:27 np0005604943 nova_compute[238883]: 2026-02-02 12:04:27.259 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb  2 07:04:27 np0005604943 nova_compute[238883]: 2026-02-02 12:04:27.259 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Ensure instance console log exists: /var/lib/nova/instances/49fa37c8-ff56-455b-9ce3-0bc67080ed52/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb  2 07:04:27 np0005604943 nova_compute[238883]: 2026-02-02 12:04:27.259 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:04:27 np0005604943 nova_compute[238883]: 2026-02-02 12:04:27.260 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:04:27 np0005604943 nova_compute[238883]: 2026-02-02 12:04:27.260 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:04:27 np0005604943 podman[261349]: 2026-02-02 12:04:27.300215046 +0000 UTC m=+0.031560552 container create b1c2eaf002aeaa86d953387e9384774285e427ed6fba2257798b73667c731047 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 07:04:27 np0005604943 systemd[1]: Started libpod-conmon-b1c2eaf002aeaa86d953387e9384774285e427ed6fba2257798b73667c731047.scope.
Feb  2 07:04:27 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:04:27 np0005604943 podman[261349]: 2026-02-02 12:04:27.375028605 +0000 UTC m=+0.106374131 container init b1c2eaf002aeaa86d953387e9384774285e427ed6fba2257798b73667c731047 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:04:27 np0005604943 podman[261349]: 2026-02-02 12:04:27.381907245 +0000 UTC m=+0.113252751 container start b1c2eaf002aeaa86d953387e9384774285e427ed6fba2257798b73667c731047 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bardeen, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:04:27 np0005604943 podman[261349]: 2026-02-02 12:04:27.286914396 +0000 UTC m=+0.018259902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:04:27 np0005604943 podman[261349]: 2026-02-02 12:04:27.384882349 +0000 UTC m=+0.116227855 container attach b1c2eaf002aeaa86d953387e9384774285e427ed6fba2257798b73667c731047 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bardeen, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 07:04:27 np0005604943 quizzical_bardeen[261365]: 167 167
Feb  2 07:04:27 np0005604943 systemd[1]: libpod-b1c2eaf002aeaa86d953387e9384774285e427ed6fba2257798b73667c731047.scope: Deactivated successfully.
Feb  2 07:04:27 np0005604943 conmon[261365]: conmon b1c2eaf002aeaa86d953 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1c2eaf002aeaa86d953387e9384774285e427ed6fba2257798b73667c731047.scope/container/memory.events
Feb  2 07:04:27 np0005604943 podman[261349]: 2026-02-02 12:04:27.388279783 +0000 UTC m=+0.119625289 container died b1c2eaf002aeaa86d953387e9384774285e427ed6fba2257798b73667c731047 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bardeen, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:04:27 np0005604943 systemd[1]: var-lib-containers-storage-overlay-7ac1df6123185da1f5b8e89a6b0708d19cfb9cbd2b9dcab7d63f7426809b5966-merged.mount: Deactivated successfully.
Feb  2 07:04:27 np0005604943 podman[261349]: 2026-02-02 12:04:27.420791187 +0000 UTC m=+0.152136693 container remove b1c2eaf002aeaa86d953387e9384774285e427ed6fba2257798b73667c731047 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bardeen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:04:27 np0005604943 systemd[1]: libpod-conmon-b1c2eaf002aeaa86d953387e9384774285e427ed6fba2257798b73667c731047.scope: Deactivated successfully.
Feb  2 07:04:27 np0005604943 podman[261388]: 2026-02-02 12:04:27.554590457 +0000 UTC m=+0.041760824 container create 429ff83a0a7b08b96b97b96c38894274ed2e808049840a8b2bfe93e947fc9f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 07:04:27 np0005604943 systemd[1]: Started libpod-conmon-429ff83a0a7b08b96b97b96c38894274ed2e808049840a8b2bfe93e947fc9f35.scope.
Feb  2 07:04:27 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:04:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0946b2ce724a206c5c00cc287be61f79ec4503217b46e390e0cbeef0ff0df9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0946b2ce724a206c5c00cc287be61f79ec4503217b46e390e0cbeef0ff0df9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0946b2ce724a206c5c00cc287be61f79ec4503217b46e390e0cbeef0ff0df9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0946b2ce724a206c5c00cc287be61f79ec4503217b46e390e0cbeef0ff0df9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:27 np0005604943 podman[261388]: 2026-02-02 12:04:27.537032852 +0000 UTC m=+0.024203249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:04:27 np0005604943 podman[261388]: 2026-02-02 12:04:27.639425874 +0000 UTC m=+0.126596271 container init 429ff83a0a7b08b96b97b96c38894274ed2e808049840a8b2bfe93e947fc9f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_snyder, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 07:04:27 np0005604943 podman[261388]: 2026-02-02 12:04:27.646391657 +0000 UTC m=+0.133562044 container start 429ff83a0a7b08b96b97b96c38894274ed2e808049840a8b2bfe93e947fc9f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_snyder, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:04:27 np0005604943 podman[261388]: 2026-02-02 12:04:27.649991546 +0000 UTC m=+0.137161953 container attach 429ff83a0a7b08b96b97b96c38894274ed2e808049840a8b2bfe93e947fc9f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 07:04:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]: {
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:    "0": [
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:        {
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "devices": [
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "/dev/loop3"
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            ],
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_name": "ceph_lv0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_size": "21470642176",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "name": "ceph_lv0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "tags": {
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.cluster_name": "ceph",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.crush_device_class": "",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.encrypted": "0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.objectstore": "bluestore",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.osd_id": "0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.type": "block",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.vdo": "0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.with_tpm": "0"
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            },
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "type": "block",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "vg_name": "ceph_vg0"
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:        }
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:    ],
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:    "1": [
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:        {
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "devices": [
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "/dev/loop4"
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            ],
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_name": "ceph_lv1",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_size": "21470642176",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "name": "ceph_lv1",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "tags": {
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.cluster_name": "ceph",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.crush_device_class": "",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.encrypted": "0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.objectstore": "bluestore",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.osd_id": "1",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.type": "block",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.vdo": "0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.with_tpm": "0"
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            },
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "type": "block",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "vg_name": "ceph_vg1"
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:        }
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:    ],
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:    "2": [
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:        {
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "devices": [
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "/dev/loop5"
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            ],
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_name": "ceph_lv2",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_size": "21470642176",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "name": "ceph_lv2",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "tags": {
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.cluster_name": "ceph",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.crush_device_class": "",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.encrypted": "0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.objectstore": "bluestore",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.osd_id": "2",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.type": "block",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.vdo": "0",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:                "ceph.with_tpm": "0"
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            },
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "type": "block",
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:            "vg_name": "ceph_vg2"
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:        }
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]:    ]
Feb  2 07:04:27 np0005604943 reverent_snyder[261404]: }
Feb  2 07:04:27 np0005604943 systemd[1]: libpod-429ff83a0a7b08b96b97b96c38894274ed2e808049840a8b2bfe93e947fc9f35.scope: Deactivated successfully.
Feb  2 07:04:27 np0005604943 podman[261388]: 2026-02-02 12:04:27.958966797 +0000 UTC m=+0.446137214 container died 429ff83a0a7b08b96b97b96c38894274ed2e808049840a8b2bfe93e947fc9f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_snyder, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Feb  2 07:04:27 np0005604943 systemd[1]: var-lib-containers-storage-overlay-bd0946b2ce724a206c5c00cc287be61f79ec4503217b46e390e0cbeef0ff0df9-merged.mount: Deactivated successfully.
Feb  2 07:04:28 np0005604943 podman[261388]: 2026-02-02 12:04:28.008273427 +0000 UTC m=+0.495443834 container remove 429ff83a0a7b08b96b97b96c38894274ed2e808049840a8b2bfe93e947fc9f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_snyder, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb  2 07:04:28 np0005604943 systemd[1]: libpod-conmon-429ff83a0a7b08b96b97b96c38894274ed2e808049840a8b2bfe93e947fc9f35.scope: Deactivated successfully.
Feb  2 07:04:28 np0005604943 nova_compute[238883]: 2026-02-02 12:04:28.118 238887 DEBUG nova.network.neutron [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Successfully created port: 41c28d19-861c-496e-ac87-5f0a4a987967 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb  2 07:04:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 2.8 MiB/s wr, 108 op/s
Feb  2 07:04:28 np0005604943 podman[261488]: 2026-02-02 12:04:28.44221946 +0000 UTC m=+0.035712305 container create 095ea960c28f6a15c859a79fa595e6de0af96d6f3b5f8af6a6ff1ea23e60b895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:04:28 np0005604943 systemd[1]: Started libpod-conmon-095ea960c28f6a15c859a79fa595e6de0af96d6f3b5f8af6a6ff1ea23e60b895.scope.
Feb  2 07:04:28 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:04:28 np0005604943 podman[261488]: 2026-02-02 12:04:28.511869301 +0000 UTC m=+0.105362166 container init 095ea960c28f6a15c859a79fa595e6de0af96d6f3b5f8af6a6ff1ea23e60b895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:04:28 np0005604943 podman[261488]: 2026-02-02 12:04:28.516917376 +0000 UTC m=+0.110410221 container start 095ea960c28f6a15c859a79fa595e6de0af96d6f3b5f8af6a6ff1ea23e60b895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 07:04:28 np0005604943 podman[261488]: 2026-02-02 12:04:28.519620943 +0000 UTC m=+0.113113878 container attach 095ea960c28f6a15c859a79fa595e6de0af96d6f3b5f8af6a6ff1ea23e60b895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 07:04:28 np0005604943 dazzling_lovelace[261504]: 167 167
Feb  2 07:04:28 np0005604943 systemd[1]: libpod-095ea960c28f6a15c859a79fa595e6de0af96d6f3b5f8af6a6ff1ea23e60b895.scope: Deactivated successfully.
Feb  2 07:04:28 np0005604943 conmon[261504]: conmon 095ea960c28f6a15c859 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-095ea960c28f6a15c859a79fa595e6de0af96d6f3b5f8af6a6ff1ea23e60b895.scope/container/memory.events
Feb  2 07:04:28 np0005604943 podman[261488]: 2026-02-02 12:04:28.523604072 +0000 UTC m=+0.117096917 container died 095ea960c28f6a15c859a79fa595e6de0af96d6f3b5f8af6a6ff1ea23e60b895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:04:28 np0005604943 podman[261488]: 2026-02-02 12:04:28.427805822 +0000 UTC m=+0.021298687 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:04:28 np0005604943 systemd[1]: var-lib-containers-storage-overlay-821371b7336e4c9ea35c3281564f489a4379ba706ef0017b0f26e9e7a9c267dc-merged.mount: Deactivated successfully.
Feb  2 07:04:28 np0005604943 podman[261488]: 2026-02-02 12:04:28.564470973 +0000 UTC m=+0.157963818 container remove 095ea960c28f6a15c859a79fa595e6de0af96d6f3b5f8af6a6ff1ea23e60b895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:04:28 np0005604943 systemd[1]: libpod-conmon-095ea960c28f6a15c859a79fa595e6de0af96d6f3b5f8af6a6ff1ea23e60b895.scope: Deactivated successfully.
Feb  2 07:04:28 np0005604943 podman[261528]: 2026-02-02 12:04:28.707429218 +0000 UTC m=+0.041607290 container create 5f94b8a9e99e67d7c55fc6fb7142b5bea4ca23e622ac88b7df5128625a91451b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_clarke, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:04:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Feb  2 07:04:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Feb  2 07:04:28 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Feb  2 07:04:28 np0005604943 systemd[1]: Started libpod-conmon-5f94b8a9e99e67d7c55fc6fb7142b5bea4ca23e622ac88b7df5128625a91451b.scope.
Feb  2 07:04:28 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:04:28 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1530d3b294853bf0e19d305fd1636ea63df72f1e364486887272a331549864ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:28 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1530d3b294853bf0e19d305fd1636ea63df72f1e364486887272a331549864ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:28 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1530d3b294853bf0e19d305fd1636ea63df72f1e364486887272a331549864ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:28 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1530d3b294853bf0e19d305fd1636ea63df72f1e364486887272a331549864ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:28 np0005604943 podman[261528]: 2026-02-02 12:04:28.689357231 +0000 UTC m=+0.023535323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:04:28 np0005604943 podman[261528]: 2026-02-02 12:04:28.791393305 +0000 UTC m=+0.125571397 container init 5f94b8a9e99e67d7c55fc6fb7142b5bea4ca23e622ac88b7df5128625a91451b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 07:04:28 np0005604943 podman[261528]: 2026-02-02 12:04:28.797358572 +0000 UTC m=+0.131536644 container start 5f94b8a9e99e67d7c55fc6fb7142b5bea4ca23e622ac88b7df5128625a91451b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_clarke, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:04:28 np0005604943 podman[261528]: 2026-02-02 12:04:28.803401551 +0000 UTC m=+0.137579623 container attach 5f94b8a9e99e67d7c55fc6fb7142b5bea4ca23e622ac88b7df5128625a91451b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_clarke, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:04:29 np0005604943 lvm[261623]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:04:29 np0005604943 lvm[261624]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:04:29 np0005604943 lvm[261624]: VG ceph_vg1 finished
Feb  2 07:04:29 np0005604943 lvm[261623]: VG ceph_vg0 finished
Feb  2 07:04:29 np0005604943 lvm[261626]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:04:29 np0005604943 lvm[261626]: VG ceph_vg2 finished
Feb  2 07:04:29 np0005604943 lvm[261627]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:04:29 np0005604943 lvm[261627]: VG ceph_vg0 finished
Feb  2 07:04:29 np0005604943 elated_clarke[261544]: {}
Feb  2 07:04:29 np0005604943 systemd[1]: libpod-5f94b8a9e99e67d7c55fc6fb7142b5bea4ca23e622ac88b7df5128625a91451b.scope: Deactivated successfully.
Feb  2 07:04:29 np0005604943 systemd[1]: libpod-5f94b8a9e99e67d7c55fc6fb7142b5bea4ca23e622ac88b7df5128625a91451b.scope: Consumed 1.243s CPU time.
Feb  2 07:04:29 np0005604943 podman[261528]: 2026-02-02 12:04:29.619852814 +0000 UTC m=+0.954030886 container died 5f94b8a9e99e67d7c55fc6fb7142b5bea4ca23e622ac88b7df5128625a91451b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_clarke, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 07:04:29 np0005604943 systemd[1]: var-lib-containers-storage-overlay-1530d3b294853bf0e19d305fd1636ea63df72f1e364486887272a331549864ab-merged.mount: Deactivated successfully.
Feb  2 07:04:29 np0005604943 podman[261528]: 2026-02-02 12:04:29.666920787 +0000 UTC m=+1.001098859 container remove 5f94b8a9e99e67d7c55fc6fb7142b5bea4ca23e622ac88b7df5128625a91451b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 07:04:29 np0005604943 systemd[1]: libpod-conmon-5f94b8a9e99e67d7c55fc6fb7142b5bea4ca23e622ac88b7df5128625a91451b.scope: Deactivated successfully.
Feb  2 07:04:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:04:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:04:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:04:29 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:04:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 959 KiB/s wr, 126 op/s
Feb  2 07:04:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:04:30 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:04:31 np0005604943 nova_compute[238883]: 2026-02-02 12:04:31.547 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:31 np0005604943 nova_compute[238883]: 2026-02-02 12:04:31.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:04:31 np0005604943 nova_compute[238883]: 2026-02-02 12:04:31.732 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 5.2 KiB/s wr, 88 op/s
Feb  2 07:04:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:04:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Feb  2 07:04:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Feb  2 07:04:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Feb  2 07:04:33 np0005604943 nova_compute[238883]: 2026-02-02 12:04:33.331 238887 DEBUG nova.network.neutron [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Successfully updated port: 41c28d19-861c-496e-ac87-5f0a4a987967 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:04:33 np0005604943 nova_compute[238883]: 2026-02-02 12:04:33.461 238887 DEBUG nova.compute.manager [req-d6796368-a09f-45e5-b8f1-04e047a54688 req-8faa7024-9a8b-490f-9c33-f626b683824d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Received event network-changed-41c28d19-861c-496e-ac87-5f0a4a987967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:04:33 np0005604943 nova_compute[238883]: 2026-02-02 12:04:33.462 238887 DEBUG nova.compute.manager [req-d6796368-a09f-45e5-b8f1-04e047a54688 req-8faa7024-9a8b-490f-9c33-f626b683824d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Refreshing instance network info cache due to event network-changed-41c28d19-861c-496e-ac87-5f0a4a987967. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:04:33 np0005604943 nova_compute[238883]: 2026-02-02 12:04:33.462 238887 DEBUG oslo_concurrency.lockutils [req-d6796368-a09f-45e5-b8f1-04e047a54688 req-8faa7024-9a8b-490f-9c33-f626b683824d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:04:33 np0005604943 nova_compute[238883]: 2026-02-02 12:04:33.462 238887 DEBUG oslo_concurrency.lockutils [req-d6796368-a09f-45e5-b8f1-04e047a54688 req-8faa7024-9a8b-490f-9c33-f626b683824d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:04:33 np0005604943 nova_compute[238883]: 2026-02-02 12:04:33.462 238887 DEBUG nova.network.neutron [req-d6796368-a09f-45e5-b8f1-04e047a54688 req-8faa7024-9a8b-490f-9c33-f626b683824d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Refreshing network info cache for port 41c28d19-861c-496e-ac87-5f0a4a987967 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:04:33 np0005604943 nova_compute[238883]: 2026-02-02 12:04:33.465 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:04:33 np0005604943 nova_compute[238883]: 2026-02-02 12:04:33.653 238887 DEBUG nova.network.neutron [req-d6796368-a09f-45e5-b8f1-04e047a54688 req-8faa7024-9a8b-490f-9c33-f626b683824d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:04:34 np0005604943 nova_compute[238883]: 2026-02-02 12:04:34.025 238887 DEBUG nova.network.neutron [req-d6796368-a09f-45e5-b8f1-04e047a54688 req-8faa7024-9a8b-490f-9c33-f626b683824d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:04:34 np0005604943 nova_compute[238883]: 2026-02-02 12:04:34.063 238887 DEBUG oslo_concurrency.lockutils [req-d6796368-a09f-45e5-b8f1-04e047a54688 req-8faa7024-9a8b-490f-9c33-f626b683824d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:04:34 np0005604943 nova_compute[238883]: 2026-02-02 12:04:34.065 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquired lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:04:34 np0005604943 nova_compute[238883]: 2026-02-02 12:04:34.065 238887 DEBUG nova.network.neutron [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:04:34 np0005604943 nova_compute[238883]: 2026-02-02 12:04:34.247 238887 DEBUG nova.network.neutron [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:04:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 6.5 KiB/s wr, 103 op/s
Feb  2 07:04:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Feb  2 07:04:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Feb  2 07:04:34 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.013 238887 DEBUG nova.network.neutron [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Updating instance_info_cache with network_info: [{"id": "41c28d19-861c-496e-ac87-5f0a4a987967", "address": "fa:16:3e:03:72:f2", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c28d19-86", "ovs_interfaceid": "41c28d19-861c-496e-ac87-5f0a4a987967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.031 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Releasing lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.031 238887 DEBUG nova.compute.manager [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Instance network_info: |[{"id": "41c28d19-861c-496e-ac87-5f0a4a987967", "address": "fa:16:3e:03:72:f2", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c28d19-86", "ovs_interfaceid": "41c28d19-861c-496e-ac87-5f0a4a987967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.034 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Start _get_guest_xml network_info=[{"id": "41c28d19-861c-496e-ac87-5f0a4a987967", "address": "fa:16:3e:03:72:f2", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c28d19-86", "ovs_interfaceid": "41c28d19-861c-496e-ac87-5f0a4a987967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': '2b083d4e-d019-45b9-a383-d587282afffe', 'delete_on_termination': True, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1645c0c1-d976-4f9f-ad42-eca5c2c0ddb0', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1645c0c1-d976-4f9f-ad42-eca5c2c0ddb0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '49fa37c8-ff56-455b-9ce3-0bc67080ed52', 'attached_at': '', 'detached_at': '', 'volume_id': '1645c0c1-d976-4f9f-ad42-eca5c2c0ddb0', 'serial': '1645c0c1-d976-4f9f-ad42-eca5c2c0ddb0'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.040 238887 WARNING nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.046 238887 DEBUG nova.virt.libvirt.host [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.047 238887 DEBUG nova.virt.libvirt.host [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.053 238887 DEBUG nova.virt.libvirt.host [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.053 238887 DEBUG nova.virt.libvirt.host [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.054 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.054 238887 DEBUG nova.virt.hardware [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.054 238887 DEBUG nova.virt.hardware [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.054 238887 DEBUG nova.virt.hardware [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.055 238887 DEBUG nova.virt.hardware [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.055 238887 DEBUG nova.virt.hardware [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.055 238887 DEBUG nova.virt.hardware [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.055 238887 DEBUG nova.virt.hardware [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.056 238887 DEBUG nova.virt.hardware [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.056 238887 DEBUG nova.virt.hardware [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.056 238887 DEBUG nova.virt.hardware [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.056 238887 DEBUG nova.virt.hardware [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.082 238887 DEBUG nova.storage.rbd_utils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 49fa37c8-ff56-455b-9ce3-0bc67080ed52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.087 238887 DEBUG oslo_concurrency.processutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:04:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:04:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4235263744' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.644 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.646 238887 DEBUG oslo_concurrency.processutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.674 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.674 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.675 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.675 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.675 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.699 238887 DEBUG nova.virt.libvirt.vif [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-857108296',display_name='tempest-TestVolumeBootPattern-volume-backed-server-857108296',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-857108296',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACjE8dh8V4cWVkX+yw8FrLRJLdPBbPG6UdwUbgn0Rgy2SUN5h0MPu7kenkNTdDGKiMuhvvLOA289aOvZUc8b0RlFCKC9xUSfOOYeEtIvthB/OR92xZN54m1j4SqVjCg9g==',key_name='tempest-keypair-1934849553',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-0kq0995v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:04:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=49fa37c8-ff56-455b-9ce3-0bc67080ed52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41c28d19-861c-496e-ac87-5f0a4a987967", "address": "fa:16:3e:03:72:f2", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c28d19-86", "ovs_interfaceid": "41c28d19-861c-496e-ac87-5f0a4a987967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.700 238887 DEBUG nova.network.os_vif_util [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "41c28d19-861c-496e-ac87-5f0a4a987967", "address": "fa:16:3e:03:72:f2", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c28d19-86", "ovs_interfaceid": "41c28d19-861c-496e-ac87-5f0a4a987967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.700 238887 DEBUG nova.network.os_vif_util [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:72:f2,bridge_name='br-int',has_traffic_filtering=True,id=41c28d19-861c-496e-ac87-5f0a4a987967,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c28d19-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.702 238887 DEBUG nova.objects.instance [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49fa37c8-ff56-455b-9ce3-0bc67080ed52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.718 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  <uuid>49fa37c8-ff56-455b-9ce3-0bc67080ed52</uuid>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  <name>instance-00000011</name>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-857108296</nova:name>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:04:35</nova:creationTime>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:        <nova:user uuid="5e3fc9d8415541ecaa0da4968c9fa242">tempest-TestVolumeBootPattern-1059348902-project-member</nova:user>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:        <nova:project uuid="e66ed51ccbb840f083b8a86476696747">tempest-TestVolumeBootPattern-1059348902</nova:project>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:        <nova:port uuid="41c28d19-861c-496e-ac87-5f0a4a987967">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <entry name="serial">49fa37c8-ff56-455b-9ce3-0bc67080ed52</entry>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <entry name="uuid">49fa37c8-ff56-455b-9ce3-0bc67080ed52</entry>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/49fa37c8-ff56-455b-9ce3-0bc67080ed52_disk.config">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-1645c0c1-d976-4f9f-ad42-eca5c2c0ddb0">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <serial>1645c0c1-d976-4f9f-ad42-eca5c2c0ddb0</serial>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:03:72:f2"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <target dev="tap41c28d19-86"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/49fa37c8-ff56-455b-9ce3-0bc67080ed52/console.log" append="off"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:04:35 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:04:35 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:04:35 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:04:35 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.719 238887 DEBUG nova.compute.manager [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Preparing to wait for external event network-vif-plugged-41c28d19-861c-496e-ac87-5f0a4a987967 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.720 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.720 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.721 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.721 238887 DEBUG nova.virt.libvirt.vif [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-857108296',display_name='tempest-TestVolumeBootPattern-volume-backed-server-857108296',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-857108296',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACjE8dh8V4cWVkX+yw8FrLRJLdPBbPG6UdwUbgn0Rgy2SUN5h0MPu7kenkNTdDGKiMuhvvLOA289aOvZUc8b0RlFCKC9xUSfOOYeEtIvthB/OR92xZN54m1j4SqVjCg9g==',key_name='tempest-keypair-1934849553',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-0kq0995v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:04:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=49fa37c8-ff56-455b-9ce3-0bc67080ed52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41c28d19-861c-496e-ac87-5f0a4a987967", "address": "fa:16:3e:03:72:f2", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c28d19-86", "ovs_interfaceid": "41c28d19-861c-496e-ac87-5f0a4a987967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.722 238887 DEBUG nova.network.os_vif_util [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "41c28d19-861c-496e-ac87-5f0a4a987967", "address": "fa:16:3e:03:72:f2", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c28d19-86", "ovs_interfaceid": "41c28d19-861c-496e-ac87-5f0a4a987967", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.723 238887 DEBUG nova.network.os_vif_util [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:72:f2,bridge_name='br-int',has_traffic_filtering=True,id=41c28d19-861c-496e-ac87-5f0a4a987967,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c28d19-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.723 238887 DEBUG os_vif [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:72:f2,bridge_name='br-int',has_traffic_filtering=True,id=41c28d19-861c-496e-ac87-5f0a4a987967,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c28d19-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.724 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.724 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.725 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.730 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.730 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41c28d19-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.731 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41c28d19-86, col_values=(('external_ids', {'iface-id': '41c28d19-861c-496e-ac87-5f0a4a987967', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:72:f2', 'vm-uuid': '49fa37c8-ff56-455b-9ce3-0bc67080ed52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.733 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:35 np0005604943 NetworkManager[49093]: <info>  [1770033875.7347] manager: (tap41c28d19-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.737 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:04:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3014770686' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3014770686' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.741 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.743 238887 INFO os_vif [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:72:f2,bridge_name='br-int',has_traffic_filtering=True,id=41c28d19-861c-496e-ac87-5f0a4a987967,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c28d19-86')#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.794 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.795 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.795 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No VIF found with MAC fa:16:3e:03:72:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.796 238887 INFO nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Using config drive#033[00m
Feb  2 07:04:35 np0005604943 nova_compute[238883]: 2026-02-02 12:04:35.820 238887 DEBUG nova.storage.rbd_utils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 49fa37c8-ff56-455b-9ce3-0bc67080ed52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.047 238887 INFO nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Creating config drive at /var/lib/nova/instances/49fa37c8-ff56-455b-9ce3-0bc67080ed52/disk.config#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.052 238887 DEBUG oslo_concurrency.processutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49fa37c8-ff56-455b-9ce3-0bc67080ed52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpahr9d_1a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.188 238887 DEBUG oslo_concurrency.processutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49fa37c8-ff56-455b-9ce3-0bc67080ed52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpahr9d_1a" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.212 238887 DEBUG nova.storage.rbd_utils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 49fa37c8-ff56-455b-9ce3-0bc67080ed52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.216 238887 DEBUG oslo_concurrency.processutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49fa37c8-ff56-455b-9ce3-0bc67080ed52/disk.config 49fa37c8-ff56-455b-9ce3-0bc67080ed52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:04:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:04:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3246746792' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.253 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.314 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.315 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.362 238887 DEBUG oslo_concurrency.processutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49fa37c8-ff56-455b-9ce3-0bc67080ed52/disk.config 49fa37c8-ff56-455b-9ce3-0bc67080ed52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.363 238887 INFO nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Deleting local config drive /var/lib/nova/instances/49fa37c8-ff56-455b-9ce3-0bc67080ed52/disk.config because it was imported into RBD.#033[00m
Feb  2 07:04:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.1 KiB/s wr, 30 op/s
Feb  2 07:04:36 np0005604943 kernel: tap41c28d19-86: entered promiscuous mode
Feb  2 07:04:36 np0005604943 NetworkManager[49093]: <info>  [1770033876.4107] manager: (tap41c28d19-86): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.410 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:36 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:36Z|00171|binding|INFO|Claiming lport 41c28d19-861c-496e-ac87-5f0a4a987967 for this chassis.
Feb  2 07:04:36 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:36Z|00172|binding|INFO|41c28d19-861c-496e-ac87-5f0a4a987967: Claiming fa:16:3e:03:72:f2 10.100.0.14
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.414 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.423 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:72:f2 10.100.0.14'], port_security=['fa:16:3e:03:72:f2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '49fa37c8-ff56-455b-9ce3-0bc67080ed52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dec0c13f-4257-499f-8319-0d7aea717815', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=41c28d19-861c-496e-ac87-5f0a4a987967) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.424 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 41c28d19-861c-496e-ac87-5f0a4a987967 in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 bound to our chassis#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.426 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34290362-cccd-452d-8e7e-22a6057fdb60#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.437 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d8aa957d-251d-4524-a5f2-30d5886fc7db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.438 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34290362-c1 in ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.442 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34290362-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.442 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[eaad4876-7afa-4ab8-9fc5-791c8be5f233]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 systemd-udevd[261802]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.443 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1396ede2-ca73-4806-89de-84e655e84fdf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.446 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:36 np0005604943 systemd-machined[206973]: New machine qemu-17-instance-00000011.
Feb  2 07:04:36 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:36Z|00173|binding|INFO|Setting lport 41c28d19-861c-496e-ac87-5f0a4a987967 ovn-installed in OVS
Feb  2 07:04:36 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:36Z|00174|binding|INFO|Setting lport 41c28d19-861c-496e-ac87-5f0a4a987967 up in Southbound
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.452 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.454 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[1be226fa-f0d7-483b-9a60-a34a3e0fa1d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Feb  2 07:04:36 np0005604943 NetworkManager[49093]: <info>  [1770033876.4631] device (tap41c28d19-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:04:36 np0005604943 NetworkManager[49093]: <info>  [1770033876.4635] device (tap41c28d19-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.477 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec28cf9-562d-4c6b-a01d-db5125fe078c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.513 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[b42efd23-fa8a-4e83-8fd5-1d6c9f27ae78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 NetworkManager[49093]: <info>  [1770033876.5207] manager: (tap34290362-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/93)
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.519 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[753fbd76-7800-4eed-9c01-0d2ea0cc8238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.549 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.551 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[83b2c6a4-6dc0-41c3-8442-a2e68b06ecc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.554 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[03fa4d29-d812-40ab-ae64-fc582c3c4109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.567 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.568 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4470MB free_disk=59.98816534038633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.568 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.569 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:04:36 np0005604943 NetworkManager[49093]: <info>  [1770033876.5761] device (tap34290362-c0): carrier: link connected
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.579 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[bbbccb95-d7ff-43a4-a6dc-f7bf4868221d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.595 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d514caa8-5ef8-468c-8351-b0649745b8f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432192, 'reachable_time': 23049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261834, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.610 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[72da9bd0-e085-4385-83b3-f18fb9a23c84]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb3:39d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432192, 'tstamp': 432192}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261835, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.621 238887 DEBUG nova.compute.manager [req-0ee3f6d3-fe75-4ba2-8bab-568ab86841e8 req-e14e2680-eec1-45a6-a6cd-d4bf0b9887c3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Received event network-vif-plugged-41c28d19-861c-496e-ac87-5f0a4a987967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.622 238887 DEBUG oslo_concurrency.lockutils [req-0ee3f6d3-fe75-4ba2-8bab-568ab86841e8 req-e14e2680-eec1-45a6-a6cd-d4bf0b9887c3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.622 238887 DEBUG oslo_concurrency.lockutils [req-0ee3f6d3-fe75-4ba2-8bab-568ab86841e8 req-e14e2680-eec1-45a6-a6cd-d4bf0b9887c3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.622 238887 DEBUG oslo_concurrency.lockutils [req-0ee3f6d3-fe75-4ba2-8bab-568ab86841e8 req-e14e2680-eec1-45a6-a6cd-d4bf0b9887c3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.623 238887 DEBUG nova.compute.manager [req-0ee3f6d3-fe75-4ba2-8bab-568ab86841e8 req-e14e2680-eec1-45a6-a6cd-d4bf0b9887c3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Processing event network-vif-plugged-41c28d19-861c-496e-ac87-5f0a4a987967 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.626 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7c5b3ed1-d7f7-49b6-8007-a71841e77789]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432192, 'reachable_time': 23049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261836, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.638 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 49fa37c8-ff56-455b-9ce3-0bc67080ed52 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.639 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.639 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.656 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1b630c3d-44f6-41ed-a295-503bbc52bd54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.668 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:04:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Feb  2 07:04:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Feb  2 07:04:36 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.720 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbef93e-d796-49b1-8dce-62dcd3f944c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.722 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.723 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.724 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34290362-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:04:36 np0005604943 NetworkManager[49093]: <info>  [1770033876.7276] manager: (tap34290362-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Feb  2 07:04:36 np0005604943 kernel: tap34290362-c0: entered promiscuous mode
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.729 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34290362-c0, col_values=(('external_ids', {'iface-id': '54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.729 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:36 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:36Z|00175|binding|INFO|Releasing lport 54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c from this chassis (sb_readonly=0)
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.733 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.736 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[16f32411-6698-445b-a5e6-139ae6a02403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.737 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-34290362-cccd-452d-8e7e-22a6057fdb60
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID 34290362-cccd-452d-8e7e-22a6057fdb60
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 07:04:36 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:04:36.738 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'env', 'PROCESS_TAG=haproxy-34290362-cccd-452d-8e7e-22a6057fdb60', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34290362-cccd-452d-8e7e-22a6057fdb60.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.744 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.920 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033876.920065, 49fa37c8-ff56-455b-9ce3-0bc67080ed52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.921 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] VM Started (Lifecycle Event)#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.923 238887 DEBUG nova.compute.manager [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.929 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.933 238887 INFO nova.virt.libvirt.driver [-] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Instance spawned successfully.#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.933 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.953 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.960 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.970 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.971 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.972 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.972 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.972 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.973 238887 DEBUG nova.virt.libvirt.driver [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.978 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.979 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033876.920455, 49fa37c8-ff56-455b-9ce3-0bc67080ed52 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:04:36 np0005604943 nova_compute[238883]: 2026-02-02 12:04:36.979 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.001 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.005 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033876.928712, 49fa37c8-ff56-455b-9ce3-0bc67080ed52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.005 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.022 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.028 238887 INFO nova.compute.manager [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Took 9.77 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.029 238887 DEBUG nova.compute.manager [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.030 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.057 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.090 238887 INFO nova.compute.manager [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Took 12.10 seconds to build instance.#033[00m
Feb  2 07:04:37 np0005604943 podman[261930]: 2026-02-02 12:04:37.107671449 +0000 UTC m=+0.051508435 container create be27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.113 238887 DEBUG oslo_concurrency.lockutils [None req-e2514191-3a53-4399-bd81-798766eb08a2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:04:37 np0005604943 systemd[1]: Started libpod-conmon-be27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44.scope.
Feb  2 07:04:37 np0005604943 podman[261930]: 2026-02-02 12:04:37.079539283 +0000 UTC m=+0.023376289 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:04:37 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:04:37 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d825938c429ed5ae861a4a4772a57336c43e79f85087c4003cd7fbafe1468c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:04:37 np0005604943 podman[261930]: 2026-02-02 12:04:37.199390437 +0000 UTC m=+0.143227443 container init be27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb  2 07:04:37 np0005604943 podman[261930]: 2026-02-02 12:04:37.203929069 +0000 UTC m=+0.147766065 container start be27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb  2 07:04:37 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[261945]: [NOTICE]   (261949) : New worker (261951) forked
Feb  2 07:04:37 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[261945]: [NOTICE]   (261949) : Loading success.
Feb  2 07:04:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:04:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/894341932' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.280 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.286 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.302 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.337 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:04:37 np0005604943 nova_compute[238883]: 2026-02-02 12:04:37.338 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:04:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:04:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Feb  2 07:04:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Feb  2 07:04:37 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Feb  2 07:04:38 np0005604943 nova_compute[238883]: 2026-02-02 12:04:38.338 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:04:38 np0005604943 nova_compute[238883]: 2026-02-02 12:04:38.338 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:04:38 np0005604943 nova_compute[238883]: 2026-02-02 12:04:38.339 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 07:04:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 37 KiB/s wr, 166 op/s
Feb  2 07:04:38 np0005604943 nova_compute[238883]: 2026-02-02 12:04:38.689 238887 DEBUG nova.compute.manager [req-e9729f9c-266f-4558-8c24-1098aa30a165 req-56b2deb3-d0e0-4096-8b9e-8687035f0006 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Received event network-vif-plugged-41c28d19-861c-496e-ac87-5f0a4a987967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:04:38 np0005604943 nova_compute[238883]: 2026-02-02 12:04:38.690 238887 DEBUG oslo_concurrency.lockutils [req-e9729f9c-266f-4558-8c24-1098aa30a165 req-56b2deb3-d0e0-4096-8b9e-8687035f0006 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:04:38 np0005604943 nova_compute[238883]: 2026-02-02 12:04:38.690 238887 DEBUG oslo_concurrency.lockutils [req-e9729f9c-266f-4558-8c24-1098aa30a165 req-56b2deb3-d0e0-4096-8b9e-8687035f0006 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:04:38 np0005604943 nova_compute[238883]: 2026-02-02 12:04:38.690 238887 DEBUG oslo_concurrency.lockutils [req-e9729f9c-266f-4558-8c24-1098aa30a165 req-56b2deb3-d0e0-4096-8b9e-8687035f0006 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:04:38 np0005604943 nova_compute[238883]: 2026-02-02 12:04:38.691 238887 DEBUG nova.compute.manager [req-e9729f9c-266f-4558-8c24-1098aa30a165 req-56b2deb3-d0e0-4096-8b9e-8687035f0006 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] No waiting events found dispatching network-vif-plugged-41c28d19-861c-496e-ac87-5f0a4a987967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:04:38 np0005604943 nova_compute[238883]: 2026-02-02 12:04:38.691 238887 WARNING nova.compute.manager [req-e9729f9c-266f-4558-8c24-1098aa30a165 req-56b2deb3-d0e0-4096-8b9e-8687035f0006 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Received unexpected event network-vif-plugged-41c28d19-861c-496e-ac87-5f0a4a987967 for instance with vm_state active and task_state None.#033[00m
Feb  2 07:04:38 np0005604943 nova_compute[238883]: 2026-02-02 12:04:38.851 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:04:38 np0005604943 nova_compute[238883]: 2026-02-02 12:04:38.852 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquired lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:04:38 np0005604943 nova_compute[238883]: 2026-02-02 12:04:38.852 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 07:04:38 np0005604943 nova_compute[238883]: 2026-02-02 12:04:38.853 238887 DEBUG nova.objects.instance [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 49fa37c8-ff56-455b-9ce3-0bc67080ed52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:04:39 np0005604943 NetworkManager[49093]: <info>  [1770033879.2078] manager: (patch-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Feb  2 07:04:39 np0005604943 NetworkManager[49093]: <info>  [1770033879.2087] manager: (patch-br-int-to-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.209 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.278 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:39 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:39Z|00176|binding|INFO|Releasing lport 54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c from this chassis (sb_readonly=0)
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.294 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.589 238887 DEBUG nova.compute.manager [req-aca75dc3-c560-478e-80e0-a309c38cc5cc req-dc6f1b7b-65b1-4309-ae38-7e019e81eac2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Received event network-changed-41c28d19-861c-496e-ac87-5f0a4a987967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.589 238887 DEBUG nova.compute.manager [req-aca75dc3-c560-478e-80e0-a309c38cc5cc req-dc6f1b7b-65b1-4309-ae38-7e019e81eac2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Refreshing instance network info cache due to event network-changed-41c28d19-861c-496e-ac87-5f0a4a987967. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.589 238887 DEBUG oslo_concurrency.lockutils [req-aca75dc3-c560-478e-80e0-a309c38cc5cc req-dc6f1b7b-65b1-4309-ae38-7e019e81eac2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.882 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Updating instance_info_cache with network_info: [{"id": "41c28d19-861c-496e-ac87-5f0a4a987967", "address": "fa:16:3e:03:72:f2", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c28d19-86", "ovs_interfaceid": "41c28d19-861c-496e-ac87-5f0a4a987967", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.912 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Releasing lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.913 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.913 238887 DEBUG oslo_concurrency.lockutils [req-aca75dc3-c560-478e-80e0-a309c38cc5cc req-dc6f1b7b-65b1-4309-ae38-7e019e81eac2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.913 238887 DEBUG nova.network.neutron [req-aca75dc3-c560-478e-80e0-a309c38cc5cc req-dc6f1b7b-65b1-4309-ae38-7e019e81eac2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Refreshing network info cache for port 41c28d19-861c-496e-ac87-5f0a4a987967 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.915 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.915 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.915 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:04:39 np0005604943 nova_compute[238883]: 2026-02-02 12:04:39.916 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:04:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 36 KiB/s wr, 233 op/s
Feb  2 07:04:40 np0005604943 nova_compute[238883]: 2026-02-02 12:04:40.734 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Feb  2 07:04:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:04:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:04:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:04:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:04:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:04:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:04:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Feb  2 07:04:40 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Feb  2 07:04:40 np0005604943 nova_compute[238883]: 2026-02-02 12:04:40.924 238887 DEBUG nova.network.neutron [req-aca75dc3-c560-478e-80e0-a309c38cc5cc req-dc6f1b7b-65b1-4309-ae38-7e019e81eac2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Updated VIF entry in instance network info cache for port 41c28d19-861c-496e-ac87-5f0a4a987967. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:04:40 np0005604943 nova_compute[238883]: 2026-02-02 12:04:40.925 238887 DEBUG nova.network.neutron [req-aca75dc3-c560-478e-80e0-a309c38cc5cc req-dc6f1b7b-65b1-4309-ae38-7e019e81eac2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Updating instance_info_cache with network_info: [{"id": "41c28d19-861c-496e-ac87-5f0a4a987967", "address": "fa:16:3e:03:72:f2", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c28d19-86", "ovs_interfaceid": "41c28d19-861c-496e-ac87-5f0a4a987967", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:04:40 np0005604943 nova_compute[238883]: 2026-02-02 12:04:40.946 238887 DEBUG oslo_concurrency.lockutils [req-aca75dc3-c560-478e-80e0-a309c38cc5cc req-dc6f1b7b-65b1-4309-ae38-7e019e81eac2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:04:41 np0005604943 podman[261964]: 2026-02-02 12:04:41.043492178 +0000 UTC m=+0.054940351 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:04:41 np0005604943 podman[261963]: 2026-02-02 12:04:41.074050743 +0000 UTC m=+0.088102930 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:04:41 np0005604943 nova_compute[238883]: 2026-02-02 12:04:41.550 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Feb  2 07:04:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Feb  2 07:04:41 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Feb  2 07:04:42 np0005604943 nova_compute[238883]: 2026-02-02 12:04:42.213 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:04:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1792148997' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1792148997' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 47 KiB/s wr, 556 op/s
Feb  2 07:04:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:04:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/247393777' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/247393777' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 12 KiB/s wr, 356 op/s
Feb  2 07:04:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3334639911' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3334639911' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:45 np0005604943 nova_compute[238883]: 2026-02-02 12:04:45.738 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Feb  2 07:04:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Feb  2 07:04:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Feb  2 07:04:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533845499' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533845499' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 10 KiB/s wr, 305 op/s
Feb  2 07:04:46 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:46Z|00177|binding|INFO|Releasing lport 54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c from this chassis (sb_readonly=0)
Feb  2 07:04:46 np0005604943 nova_compute[238883]: 2026-02-02 12:04:46.557 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:46 np0005604943 nova_compute[238883]: 2026-02-02 12:04:46.560 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Feb  2 07:04:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Feb  2 07:04:46 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Feb  2 07:04:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050658231' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:46 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050658231' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:04:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Feb  2 07:04:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Feb  2 07:04:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Feb  2 07:04:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 4.5 KiB/s wr, 171 op/s
Feb  2 07:04:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3267635402' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3267635402' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:04:49 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:49Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:72:f2 10.100.0.14
Feb  2 07:04:49 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:49Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:72:f2 10.100.0.14
Feb  2 07:04:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 148 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 422 KiB/s rd, 2.4 MiB/s wr, 259 op/s
Feb  2 07:04:50 np0005604943 nova_compute[238883]: 2026-02-02 12:04:50.742 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:51 np0005604943 ovn_controller[145056]: 2026-02-02T12:04:51Z|00178|binding|INFO|Releasing lport 54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c from this chassis (sb_readonly=0)
Feb  2 07:04:51 np0005604943 nova_compute[238883]: 2026-02-02 12:04:51.063 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:51 np0005604943 nova_compute[238883]: 2026-02-02 12:04:51.560 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 166 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 769 KiB/s rd, 3.9 MiB/s wr, 305 op/s
Feb  2 07:04:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:04:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Feb  2 07:04:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Feb  2 07:04:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Feb  2 07:04:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 167 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 710 KiB/s rd, 3.4 MiB/s wr, 284 op/s
Feb  2 07:04:55 np0005604943 nova_compute[238883]: 2026-02-02 12:04:55.746 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 167 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 531 KiB/s rd, 2.9 MiB/s wr, 138 op/s
Feb  2 07:04:56 np0005604943 nova_compute[238883]: 2026-02-02 12:04:56.563 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Feb  2 07:04:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Feb  2 07:04:56 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Feb  2 07:04:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:04:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Feb  2 07:04:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Feb  2 07:04:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Feb  2 07:04:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 167 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 56 KiB/s wr, 31 op/s
Feb  2 07:04:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Feb  2 07:04:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Feb  2 07:04:58 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Feb  2 07:04:59 np0005604943 nova_compute[238883]: 2026-02-02 12:04:59.552 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:04:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:04:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3084250171' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:04:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:04:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3084250171' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:05:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 167 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 29 KiB/s wr, 55 op/s
Feb  2 07:05:00 np0005604943 nova_compute[238883]: 2026-02-02 12:05:00.798 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:01 np0005604943 nova_compute[238883]: 2026-02-02 12:05:01.565 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 167 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 33 KiB/s wr, 69 op/s
Feb  2 07:05:02 np0005604943 nova_compute[238883]: 2026-02-02 12:05:02.548 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:05:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Feb  2 07:05:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Feb  2 07:05:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Feb  2 07:05:03 np0005604943 nova_compute[238883]: 2026-02-02 12:05:03.766 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:03 np0005604943 nova_compute[238883]: 2026-02-02 12:05:03.767 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:03 np0005604943 nova_compute[238883]: 2026-02-02 12:05:03.801 238887 DEBUG nova.compute.manager [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:05:03 np0005604943 nova_compute[238883]: 2026-02-02 12:05:03.886 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:03 np0005604943 nova_compute[238883]: 2026-02-02 12:05:03.887 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:03 np0005604943 nova_compute[238883]: 2026-02-02 12:05:03.896 238887 DEBUG nova.virt.hardware [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:05:03 np0005604943 nova_compute[238883]: 2026-02-02 12:05:03.897 238887 INFO nova.compute.claims [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:05:04 np0005604943 nova_compute[238883]: 2026-02-02 12:05:04.034 238887 DEBUG oslo_concurrency.processutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 167 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 9.5 KiB/s wr, 82 op/s
Feb  2 07:05:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:05:04 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1839614571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:05:04 np0005604943 nova_compute[238883]: 2026-02-02 12:05:04.607 238887 DEBUG oslo_concurrency.processutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:04 np0005604943 nova_compute[238883]: 2026-02-02 12:05:04.614 238887 DEBUG nova.compute.provider_tree [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:05:04 np0005604943 nova_compute[238883]: 2026-02-02 12:05:04.742 238887 DEBUG nova.scheduler.client.report [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:05:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Feb  2 07:05:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Feb  2 07:05:04 np0005604943 nova_compute[238883]: 2026-02-02 12:05:04.794 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:04 np0005604943 nova_compute[238883]: 2026-02-02 12:05:04.795 238887 DEBUG nova.compute.manager [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:05:04 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Feb  2 07:05:04 np0005604943 nova_compute[238883]: 2026-02-02 12:05:04.896 238887 INFO nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:05:04 np0005604943 nova_compute[238883]: 2026-02-02 12:05:04.901 238887 DEBUG nova.compute.manager [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:05:04 np0005604943 nova_compute[238883]: 2026-02-02 12:05:04.902 238887 DEBUG nova.network.neutron [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:05:05 np0005604943 nova_compute[238883]: 2026-02-02 12:05:05.046 238887 DEBUG nova.compute.manager [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:05:05 np0005604943 nova_compute[238883]: 2026-02-02 12:05:05.282 238887 INFO nova.virt.block_device [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Booting with volume snapshot 8fbfd2bc-9968-4152-ab9a-b9d1139ca2f3 at /dev/vda#033[00m
Feb  2 07:05:05 np0005604943 nova_compute[238883]: 2026-02-02 12:05:05.480 238887 DEBUG nova.policy [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e3fc9d8415541ecaa0da4968c9fa242', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e66ed51ccbb840f083b8a86476696747', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:05:05 np0005604943 nova_compute[238883]: 2026-02-02 12:05:05.802 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:06 np0005604943 nova_compute[238883]: 2026-02-02 12:05:06.357 238887 DEBUG nova.network.neutron [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Successfully created port: 93006a5f-209d-479d-85bb-9f019bd5ddff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:05:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 167 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 4.2 KiB/s wr, 34 op/s
Feb  2 07:05:06 np0005604943 nova_compute[238883]: 2026-02-02 12:05:06.570 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:05:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/809282951' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:05:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:05:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/809282951' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:05:07 np0005604943 nova_compute[238883]: 2026-02-02 12:05:07.573 238887 DEBUG nova.network.neutron [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Successfully updated port: 93006a5f-209d-479d-85bb-9f019bd5ddff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:05:07 np0005604943 nova_compute[238883]: 2026-02-02 12:05:07.590 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "refresh_cache-b2e6a3a8-544c-4442-ab4e-d27954c0de48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:05:07 np0005604943 nova_compute[238883]: 2026-02-02 12:05:07.590 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquired lock "refresh_cache-b2e6a3a8-544c-4442-ab4e-d27954c0de48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:05:07 np0005604943 nova_compute[238883]: 2026-02-02 12:05:07.590 238887 DEBUG nova.network.neutron [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:05:07 np0005604943 nova_compute[238883]: 2026-02-02 12:05:07.691 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:05:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Feb  2 07:05:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Feb  2 07:05:07 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Feb  2 07:05:07 np0005604943 nova_compute[238883]: 2026-02-02 12:05:07.724 238887 DEBUG nova.compute.manager [req-e55f7967-bc6e-4890-8e9c-ac512e53f6ab req-3117bb9c-b2ca-455b-bb5b-60c4dc73771a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Received event network-changed-93006a5f-209d-479d-85bb-9f019bd5ddff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:07 np0005604943 nova_compute[238883]: 2026-02-02 12:05:07.724 238887 DEBUG nova.compute.manager [req-e55f7967-bc6e-4890-8e9c-ac512e53f6ab req-3117bb9c-b2ca-455b-bb5b-60c4dc73771a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Refreshing instance network info cache due to event network-changed-93006a5f-209d-479d-85bb-9f019bd5ddff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:05:07 np0005604943 nova_compute[238883]: 2026-02-02 12:05:07.724 238887 DEBUG oslo_concurrency.lockutils [req-e55f7967-bc6e-4890-8e9c-ac512e53f6ab req-3117bb9c-b2ca-455b-bb5b-60c4dc73771a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-b2e6a3a8-544c-4442-ab4e-d27954c0de48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:05:07 np0005604943 nova_compute[238883]: 2026-02-02 12:05:07.834 238887 DEBUG nova.network.neutron [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:05:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 167 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 5.8 KiB/s wr, 87 op/s
Feb  2 07:05:08 np0005604943 nova_compute[238883]: 2026-02-02 12:05:08.767 238887 DEBUG nova.network.neutron [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Updating instance_info_cache with network_info: [{"id": "93006a5f-209d-479d-85bb-9f019bd5ddff", "address": "fa:16:3e:d6:f3:3d", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93006a5f-20", "ovs_interfaceid": "93006a5f-209d-479d-85bb-9f019bd5ddff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:05:08 np0005604943 nova_compute[238883]: 2026-02-02 12:05:08.821 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Releasing lock "refresh_cache-b2e6a3a8-544c-4442-ab4e-d27954c0de48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:05:08 np0005604943 nova_compute[238883]: 2026-02-02 12:05:08.821 238887 DEBUG nova.compute.manager [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Instance network_info: |[{"id": "93006a5f-209d-479d-85bb-9f019bd5ddff", "address": "fa:16:3e:d6:f3:3d", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93006a5f-20", "ovs_interfaceid": "93006a5f-209d-479d-85bb-9f019bd5ddff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:05:08 np0005604943 nova_compute[238883]: 2026-02-02 12:05:08.822 238887 DEBUG oslo_concurrency.lockutils [req-e55f7967-bc6e-4890-8e9c-ac512e53f6ab req-3117bb9c-b2ca-455b-bb5b-60c4dc73771a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-b2e6a3a8-544c-4442-ab4e-d27954c0de48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:05:08 np0005604943 nova_compute[238883]: 2026-02-02 12:05:08.822 238887 DEBUG nova.network.neutron [req-e55f7967-bc6e-4890-8e9c-ac512e53f6ab req-3117bb9c-b2ca-455b-bb5b-60c4dc73771a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Refreshing network info cache for port 93006a5f-209d-479d-85bb-9f019bd5ddff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:05:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:05:09
Feb  2 07:05:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:05:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:05:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'vms', 'images', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.control']
Feb  2 07:05:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.739 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.827 238887 DEBUG os_brick.utils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.828 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Feb  2 07:05:09 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.838 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.840 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[2010d0a6-96d2-4ebf-8c7b-e2f00448a56e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.844 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.852 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.853 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa7bdb9-8ea1-4a3e-8bf7-49a92bae432e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.855 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.863 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.864 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[a0676050-9685-4717-bde8-a7c694abc580]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.865 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[a964ae4a-1cae-4376-bf00-194cfc62741f]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.866 238887 DEBUG oslo_concurrency.processutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.895 238887 DEBUG oslo_concurrency.processutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.897 238887 DEBUG os_brick.initiator.connectors.lightos [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.898 238887 DEBUG os_brick.initiator.connectors.lightos [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.898 238887 DEBUG os_brick.initiator.connectors.lightos [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.898 238887 DEBUG os_brick.utils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:05:09 np0005604943 nova_compute[238883]: 2026-02-02 12:05:09.898 238887 DEBUG nova.virt.block_device [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Updating existing volume attachment record: d9f70457-c23c-46b4-9254-4def54917661 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:05:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:10.030 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:10.031 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:10.032 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 167 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 5.8 KiB/s wr, 78 op/s
Feb  2 07:05:10 np0005604943 nova_compute[238883]: 2026-02-02 12:05:10.675 238887 DEBUG nova.network.neutron [req-e55f7967-bc6e-4890-8e9c-ac512e53f6ab req-3117bb9c-b2ca-455b-bb5b-60c4dc73771a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Updated VIF entry in instance network info cache for port 93006a5f-209d-479d-85bb-9f019bd5ddff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:05:10 np0005604943 nova_compute[238883]: 2026-02-02 12:05:10.676 238887 DEBUG nova.network.neutron [req-e55f7967-bc6e-4890-8e9c-ac512e53f6ab req-3117bb9c-b2ca-455b-bb5b-60c4dc73771a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Updating instance_info_cache with network_info: [{"id": "93006a5f-209d-479d-85bb-9f019bd5ddff", "address": "fa:16:3e:d6:f3:3d", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93006a5f-20", "ovs_interfaceid": "93006a5f-209d-479d-85bb-9f019bd5ddff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:05:10 np0005604943 nova_compute[238883]: 2026-02-02 12:05:10.697 238887 DEBUG oslo_concurrency.lockutils [req-e55f7967-bc6e-4890-8e9c-ac512e53f6ab req-3117bb9c-b2ca-455b-bb5b-60c4dc73771a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-b2e6a3a8-544c-4442-ab4e-d27954c0de48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:05:10 np0005604943 nova_compute[238883]: 2026-02-02 12:05:10.805 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:05:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/385856515' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:05:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:05:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:11.287 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.287 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:11 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:11.288 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.317 238887 DEBUG nova.compute.manager [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.318 238887 DEBUG nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.319 238887 INFO nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Creating image(s)#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.319 238887 DEBUG nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.319 238887 DEBUG nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Ensure instance console log exists: /var/lib/nova/instances/b2e6a3a8-544c-4442-ab4e-d27954c0de48/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.320 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.320 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.320 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.322 238887 DEBUG nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Start _get_guest_xml network_info=[{"id": "93006a5f-209d-479d-85bb-9f019bd5ddff", "address": "fa:16:3e:d6:f3:3d", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93006a5f-20", "ovs_interfaceid": "93006a5f-209d-479d-85bb-9f019bd5ddff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-02-02T12:04:56Z,direct_url=<?>,disk_format='qcow2',id=421c3c59-9b2e-48e9-be9c-5972b0d34b00,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-717984886',owner='e66ed51ccbb840f083b8a86476696747',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-02-02T12:04:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': 'd9f70457-c23c-46b4-9254-4def54917661', 'delete_on_termination': True, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c666995f-b3ea-40b8-b445-50f26d9b6bec', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c666995f-b3ea-40b8-b445-50f26d9b6bec', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b2e6a3a8-544c-4442-ab4e-d27954c0de48', 'attached_at': '', 'detached_at': '', 'volume_id': 'c666995f-b3ea-40b8-b445-50f26d9b6bec', 'serial': 'c666995f-b3ea-40b8-b445-50f26d9b6bec'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.327 238887 WARNING nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.332 238887 DEBUG nova.virt.libvirt.host [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.332 238887 DEBUG nova.virt.libvirt.host [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.336 238887 DEBUG nova.virt.libvirt.host [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.337 238887 DEBUG nova.virt.libvirt.host [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.337 238887 DEBUG nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.337 238887 DEBUG nova.virt.hardware [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-02-02T12:04:56Z,direct_url=<?>,disk_format='qcow2',id=421c3c59-9b2e-48e9-be9c-5972b0d34b00,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-717984886',owner='e66ed51ccbb840f083b8a86476696747',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-02-02T12:04:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.338 238887 DEBUG nova.virt.hardware [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.338 238887 DEBUG nova.virt.hardware [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.338 238887 DEBUG nova.virt.hardware [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.338 238887 DEBUG nova.virt.hardware [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.338 238887 DEBUG nova.virt.hardware [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.339 238887 DEBUG nova.virt.hardware [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.339 238887 DEBUG nova.virt.hardware [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.339 238887 DEBUG nova.virt.hardware [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.339 238887 DEBUG nova.virt.hardware [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.339 238887 DEBUG nova.virt.hardware [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.363 238887 DEBUG nova.storage.rbd_utils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image b2e6a3a8-544c-4442-ab4e-d27954c0de48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.367 238887 DEBUG oslo_concurrency.processutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.572 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Feb  2 07:05:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Feb  2 07:05:11 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Feb  2 07:05:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:05:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1868275298' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.908 238887 DEBUG oslo_concurrency.processutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.968 238887 DEBUG nova.virt.libvirt.vif [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:05:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-146622984',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-146622984',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-146622984',id=18,image_ref='421c3c59-9b2e-48e9-be9c-5972b0d34b00',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFo6aZgzdmgq5Xs7apZZZSeH77QPXs3ivcJkISaDCec1l8Xq3E0TW/61SOm+v7JQhl+wSwPBZfZufSXwDEGpOVbRLprl32CQssPm67PIHYznTUlSBm7nl+pRhRqTzDTQHQ==',key_name='tempest-keypair-1283497596',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-ueicvhch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1059348902',image_owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:05:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=b2e6a3a8-544c-4442-ab4e-d27954c0de48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building'
) vif={"id": "93006a5f-209d-479d-85bb-9f019bd5ddff", "address": "fa:16:3e:d6:f3:3d", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93006a5f-20", "ovs_interfaceid": "93006a5f-209d-479d-85bb-9f019bd5ddff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.969 238887 DEBUG nova.network.os_vif_util [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "93006a5f-209d-479d-85bb-9f019bd5ddff", "address": "fa:16:3e:d6:f3:3d", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93006a5f-20", "ovs_interfaceid": "93006a5f-209d-479d-85bb-9f019bd5ddff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.970 238887 DEBUG nova.network.os_vif_util [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:f3:3d,bridge_name='br-int',has_traffic_filtering=True,id=93006a5f-209d-479d-85bb-9f019bd5ddff,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93006a5f-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:05:11 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.971 238887 DEBUG nova.objects.instance [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'pci_devices' on Instance uuid b2e6a3a8-544c-4442-ab4e-d27954c0de48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.998 238887 DEBUG nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  <uuid>b2e6a3a8-544c-4442-ab4e-d27954c0de48</uuid>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  <name>instance-00000012</name>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-146622984</nova:name>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:05:11</nova:creationTime>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:        <nova:user uuid="5e3fc9d8415541ecaa0da4968c9fa242">tempest-TestVolumeBootPattern-1059348902-project-member</nova:user>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:        <nova:project uuid="e66ed51ccbb840f083b8a86476696747">tempest-TestVolumeBootPattern-1059348902</nova:project>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="421c3c59-9b2e-48e9-be9c-5972b0d34b00"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:        <nova:port uuid="93006a5f-209d-479d-85bb-9f019bd5ddff">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <entry name="serial">b2e6a3a8-544c-4442-ab4e-d27954c0de48</entry>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <entry name="uuid">b2e6a3a8-544c-4442-ab4e-d27954c0de48</entry>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/b2e6a3a8-544c-4442-ab4e-d27954c0de48_disk.config">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-c666995f-b3ea-40b8-b445-50f26d9b6bec">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <serial>c666995f-b3ea-40b8-b445-50f26d9b6bec</serial>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:d6:f3:3d"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <target dev="tap93006a5f-20"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/b2e6a3a8-544c-4442-ab4e-d27954c0de48/console.log" append="off"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <input type="keyboard" bus="usb"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:05:12 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:05:12 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:05:12 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:05:12 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:11.999 238887 DEBUG nova.compute.manager [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Preparing to wait for external event network-vif-plugged-93006a5f-209d-479d-85bb-9f019bd5ddff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.000 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.000 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.000 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.001 238887 DEBUG nova.virt.libvirt.vif [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:05:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-146622984',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-146622984',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-146622984',id=18,image_ref='421c3c59-9b2e-48e9-be9c-5972b0d34b00',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFo6aZgzdmgq5Xs7apZZZSeH77QPXs3ivcJkISaDCec1l8Xq3E0TW/61SOm+v7JQhl+wSwPBZfZufSXwDEGpOVbRLprl32CQssPm67PIHYznTUlSBm7nl+pRhRqTzDTQHQ==',key_name='tempest-keypair-1283497596',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-ueicvhch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1059348902',image_owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:05:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=b2e6a3a8-544c-4442-ab4e-d27954c0de48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state=
'building') vif={"id": "93006a5f-209d-479d-85bb-9f019bd5ddff", "address": "fa:16:3e:d6:f3:3d", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93006a5f-20", "ovs_interfaceid": "93006a5f-209d-479d-85bb-9f019bd5ddff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.001 238887 DEBUG nova.network.os_vif_util [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "93006a5f-209d-479d-85bb-9f019bd5ddff", "address": "fa:16:3e:d6:f3:3d", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93006a5f-20", "ovs_interfaceid": "93006a5f-209d-479d-85bb-9f019bd5ddff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.002 238887 DEBUG nova.network.os_vif_util [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:f3:3d,bridge_name='br-int',has_traffic_filtering=True,id=93006a5f-209d-479d-85bb-9f019bd5ddff,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93006a5f-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.002 238887 DEBUG os_vif [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:f3:3d,bridge_name='br-int',has_traffic_filtering=True,id=93006a5f-209d-479d-85bb-9f019bd5ddff,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93006a5f-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.003 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.003 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.004 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.007 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.008 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93006a5f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.008 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap93006a5f-20, col_values=(('external_ids', {'iface-id': '93006a5f-209d-479d-85bb-9f019bd5ddff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:f3:3d', 'vm-uuid': 'b2e6a3a8-544c-4442-ab4e-d27954c0de48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.010 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:12 np0005604943 NetworkManager[49093]: <info>  [1770033912.0117] manager: (tap93006a5f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.012 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.021 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.022 238887 INFO os_vif [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:f3:3d,bridge_name='br-int',has_traffic_filtering=True,id=93006a5f-209d-479d-85bb-9f019bd5ddff,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93006a5f-20')#033[00m
Feb  2 07:05:12 np0005604943 podman[262078]: 2026-02-02 12:05:12.071931054 +0000 UTC m=+0.088562921 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb  2 07:05:12 np0005604943 podman[262077]: 2026-02-02 12:05:12.072063358 +0000 UTC m=+0.091566526 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.078 238887 DEBUG nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.079 238887 DEBUG nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.079 238887 DEBUG nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No VIF found with MAC fa:16:3e:d6:f3:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.079 238887 INFO nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Using config drive#033[00m
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.106 238887 DEBUG nova.storage.rbd_utils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image b2e6a3a8-544c-4442-ab4e-d27954c0de48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:05:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 167 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 8.2 KiB/s wr, 111 op/s
Feb  2 07:05:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:05:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Feb  2 07:05:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Feb  2 07:05:12 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Feb  2 07:05:12 np0005604943 nova_compute[238883]: 2026-02-02 12:05:12.998 238887 INFO nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Creating config drive at /var/lib/nova/instances/b2e6a3a8-544c-4442-ab4e-d27954c0de48/disk.config#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.003 238887 DEBUG oslo_concurrency.processutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b2e6a3a8-544c-4442-ab4e-d27954c0de48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpt6edz3qk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.128 238887 DEBUG oslo_concurrency.processutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b2e6a3a8-544c-4442-ab4e-d27954c0de48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpt6edz3qk" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.164 238887 DEBUG nova.storage.rbd_utils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image b2e6a3a8-544c-4442-ab4e-d27954c0de48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.168 238887 DEBUG oslo_concurrency.processutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b2e6a3a8-544c-4442-ab4e-d27954c0de48/disk.config b2e6a3a8-544c-4442-ab4e-d27954c0de48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.298 238887 DEBUG oslo_concurrency.processutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b2e6a3a8-544c-4442-ab4e-d27954c0de48/disk.config b2e6a3a8-544c-4442-ab4e-d27954c0de48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.299 238887 INFO nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Deleting local config drive /var/lib/nova/instances/b2e6a3a8-544c-4442-ab4e-d27954c0de48/disk.config because it was imported into RBD.#033[00m
Feb  2 07:05:13 np0005604943 kernel: tap93006a5f-20: entered promiscuous mode
Feb  2 07:05:13 np0005604943 NetworkManager[49093]: <info>  [1770033913.3574] manager: (tap93006a5f-20): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Feb  2 07:05:13 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:13Z|00179|binding|INFO|Claiming lport 93006a5f-209d-479d-85bb-9f019bd5ddff for this chassis.
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.360 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:13 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:13Z|00180|binding|INFO|93006a5f-209d-479d-85bb-9f019bd5ddff: Claiming fa:16:3e:d6:f3:3d 10.100.0.12
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.371 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:13 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:13Z|00181|binding|INFO|Setting lport 93006a5f-209d-479d-85bb-9f019bd5ddff ovn-installed in OVS
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.372 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.374 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:f3:3d 10.100.0.12'], port_security=['fa:16:3e:d6:f3:3d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b2e6a3a8-544c-4442-ab4e-d27954c0de48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '2', 'neutron:security_group_ids': '47811367-fb4b-48f8-b202-cddf3c298120', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=93006a5f-209d-479d-85bb-9f019bd5ddff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:05:13 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:13Z|00182|binding|INFO|Setting lport 93006a5f-209d-479d-85bb-9f019bd5ddff up in Southbound
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.376 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 93006a5f-209d-479d-85bb-9f019bd5ddff in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 bound to our chassis#033[00m
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.377 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34290362-cccd-452d-8e7e-22a6057fdb60#033[00m
Feb  2 07:05:13 np0005604943 systemd-machined[206973]: New machine qemu-18-instance-00000012.
Feb  2 07:05:13 np0005604943 systemd-udevd[262195]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.391 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0e7b02-a6d6-46eb-bf67-583235bb7fb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:13 np0005604943 NetworkManager[49093]: <info>  [1770033913.4031] device (tap93006a5f-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:05:13 np0005604943 NetworkManager[49093]: <info>  [1770033913.4043] device (tap93006a5f-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:05:13 np0005604943 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.416 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[ade793ba-1cc2-46a7-b0cf-78aa2308c750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.420 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[ac40074a-c356-49f9-a194-2aadc5560ffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.450 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ba81b9-d575-4f2f-a47b-887b989b0a49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:05:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4156900301' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:05:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:05:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4156900301' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.469 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[8e093eba-ae1c-428b-a697-3510b6429101]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432192, 'reachable_time': 23049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262207, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.481 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5a0647-dc76-4b42-a23a-c0f7349056c8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34290362-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432203, 'tstamp': 432203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262209, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34290362-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432206, 'tstamp': 432206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262209, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.483 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.484 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.486 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34290362-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.486 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.486 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34290362-c0, col_values=(('external_ids', {'iface-id': '54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:13 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:13.487 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.593 238887 DEBUG nova.compute.manager [req-7d1c1709-ad22-4e41-970d-1845a6c1bf6d req-782d01cc-a7b7-463c-876d-a68bf7cd04dc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Received event network-vif-plugged-93006a5f-209d-479d-85bb-9f019bd5ddff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.594 238887 DEBUG oslo_concurrency.lockutils [req-7d1c1709-ad22-4e41-970d-1845a6c1bf6d req-782d01cc-a7b7-463c-876d-a68bf7cd04dc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.595 238887 DEBUG oslo_concurrency.lockutils [req-7d1c1709-ad22-4e41-970d-1845a6c1bf6d req-782d01cc-a7b7-463c-876d-a68bf7cd04dc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.595 238887 DEBUG oslo_concurrency.lockutils [req-7d1c1709-ad22-4e41-970d-1845a6c1bf6d req-782d01cc-a7b7-463c-876d-a68bf7cd04dc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.595 238887 DEBUG nova.compute.manager [req-7d1c1709-ad22-4e41-970d-1845a6c1bf6d req-782d01cc-a7b7-463c-876d-a68bf7cd04dc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Processing event network-vif-plugged-93006a5f-209d-479d-85bb-9f019bd5ddff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.805 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033913.804801, b2e6a3a8-544c-4442-ab4e-d27954c0de48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.806 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] VM Started (Lifecycle Event)#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.810 238887 DEBUG nova.compute.manager [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.814 238887 DEBUG nova.virt.libvirt.driver [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.820 238887 INFO nova.virt.libvirt.driver [-] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Instance spawned successfully.#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.821 238887 INFO nova.compute.manager [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Took 2.50 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.822 238887 DEBUG nova.compute.manager [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.826 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.838 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.865 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.866 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033913.8090026, b2e6a3a8-544c-4442-ab4e-d27954c0de48 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.866 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.896 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.900 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033913.8133652, b2e6a3a8-544c-4442-ab4e-d27954c0de48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.900 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.909 238887 INFO nova.compute.manager [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Took 10.06 seconds to build instance.#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.918 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.922 238887 DEBUG oslo_concurrency.lockutils [None req-f92f7d40-1dfc-45f4-9e2f-863a3b9c0637 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:13 np0005604943 nova_compute[238883]: 2026-02-02 12:05:13.923 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:05:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 167 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.8 KiB/s wr, 58 op/s
Feb  2 07:05:15 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:15.291 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:15 np0005604943 nova_compute[238883]: 2026-02-02 12:05:15.689 238887 DEBUG nova.compute.manager [req-0aafa92c-5e65-4915-ab7e-38859ee50fd8 req-0d4844fa-159c-4bdd-ba1b-55fd5e62ee99 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Received event network-vif-plugged-93006a5f-209d-479d-85bb-9f019bd5ddff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:15 np0005604943 nova_compute[238883]: 2026-02-02 12:05:15.690 238887 DEBUG oslo_concurrency.lockutils [req-0aafa92c-5e65-4915-ab7e-38859ee50fd8 req-0d4844fa-159c-4bdd-ba1b-55fd5e62ee99 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:15 np0005604943 nova_compute[238883]: 2026-02-02 12:05:15.690 238887 DEBUG oslo_concurrency.lockutils [req-0aafa92c-5e65-4915-ab7e-38859ee50fd8 req-0d4844fa-159c-4bdd-ba1b-55fd5e62ee99 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:15 np0005604943 nova_compute[238883]: 2026-02-02 12:05:15.690 238887 DEBUG oslo_concurrency.lockutils [req-0aafa92c-5e65-4915-ab7e-38859ee50fd8 req-0d4844fa-159c-4bdd-ba1b-55fd5e62ee99 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:15 np0005604943 nova_compute[238883]: 2026-02-02 12:05:15.690 238887 DEBUG nova.compute.manager [req-0aafa92c-5e65-4915-ab7e-38859ee50fd8 req-0d4844fa-159c-4bdd-ba1b-55fd5e62ee99 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] No waiting events found dispatching network-vif-plugged-93006a5f-209d-479d-85bb-9f019bd5ddff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:05:15 np0005604943 nova_compute[238883]: 2026-02-02 12:05:15.690 238887 WARNING nova.compute.manager [req-0aafa92c-5e65-4915-ab7e-38859ee50fd8 req-0d4844fa-159c-4bdd-ba1b-55fd5e62ee99 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Received unexpected event network-vif-plugged-93006a5f-209d-479d-85bb-9f019bd5ddff for instance with vm_state active and task_state None.#033[00m
Feb  2 07:05:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 167 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.2 KiB/s wr, 35 op/s
Feb  2 07:05:16 np0005604943 nova_compute[238883]: 2026-02-02 12:05:16.574 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Feb  2 07:05:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Feb  2 07:05:16 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Feb  2 07:05:17 np0005604943 nova_compute[238883]: 2026-02-02 12:05:17.011 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:05:17 np0005604943 nova_compute[238883]: 2026-02-02 12:05:17.833 238887 DEBUG nova.compute.manager [req-5a8c154d-cf25-4d35-97d0-34174b3a881a req-a77a1201-9fca-4249-85e0-ef5fc77a81b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Received event network-changed-93006a5f-209d-479d-85bb-9f019bd5ddff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:17 np0005604943 nova_compute[238883]: 2026-02-02 12:05:17.833 238887 DEBUG nova.compute.manager [req-5a8c154d-cf25-4d35-97d0-34174b3a881a req-a77a1201-9fca-4249-85e0-ef5fc77a81b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Refreshing instance network info cache due to event network-changed-93006a5f-209d-479d-85bb-9f019bd5ddff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:05:17 np0005604943 nova_compute[238883]: 2026-02-02 12:05:17.833 238887 DEBUG oslo_concurrency.lockutils [req-5a8c154d-cf25-4d35-97d0-34174b3a881a req-a77a1201-9fca-4249-85e0-ef5fc77a81b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-b2e6a3a8-544c-4442-ab4e-d27954c0de48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:05:17 np0005604943 nova_compute[238883]: 2026-02-02 12:05:17.834 238887 DEBUG oslo_concurrency.lockutils [req-5a8c154d-cf25-4d35-97d0-34174b3a881a req-a77a1201-9fca-4249-85e0-ef5fc77a81b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-b2e6a3a8-544c-4442-ab4e-d27954c0de48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:05:17 np0005604943 nova_compute[238883]: 2026-02-02 12:05:17.834 238887 DEBUG nova.network.neutron [req-5a8c154d-cf25-4d35-97d0-34174b3a881a req-a77a1201-9fca-4249-85e0-ef5fc77a81b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Refreshing network info cache for port 93006a5f-209d-479d-85bb-9f019bd5ddff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:05:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Feb  2 07:05:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Feb  2 07:05:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Feb  2 07:05:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 167 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 74 KiB/s wr, 159 op/s
Feb  2 07:05:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:05:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1133226252' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:05:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:05:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1133226252' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:05:19 np0005604943 nova_compute[238883]: 2026-02-02 12:05:19.758 238887 DEBUG nova.network.neutron [req-5a8c154d-cf25-4d35-97d0-34174b3a881a req-a77a1201-9fca-4249-85e0-ef5fc77a81b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Updated VIF entry in instance network info cache for port 93006a5f-209d-479d-85bb-9f019bd5ddff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:05:19 np0005604943 nova_compute[238883]: 2026-02-02 12:05:19.758 238887 DEBUG nova.network.neutron [req-5a8c154d-cf25-4d35-97d0-34174b3a881a req-a77a1201-9fca-4249-85e0-ef5fc77a81b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Updating instance_info_cache with network_info: [{"id": "93006a5f-209d-479d-85bb-9f019bd5ddff", "address": "fa:16:3e:d6:f3:3d", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93006a5f-20", "ovs_interfaceid": "93006a5f-209d-479d-85bb-9f019bd5ddff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:05:19 np0005604943 nova_compute[238883]: 2026-02-02 12:05:19.779 238887 DEBUG oslo_concurrency.lockutils [req-5a8c154d-cf25-4d35-97d0-34174b3a881a req-a77a1201-9fca-4249-85e0-ef5fc77a81b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-b2e6a3a8-544c-4442-ab4e-d27954c0de48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:05:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 167 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 58 KiB/s wr, 160 op/s
Feb  2 07:05:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Feb  2 07:05:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Feb  2 07:05:20 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.5732869096524845e-06 of space, bias 1.0, pg target 0.0022719860728957456 quantized to 32 (current 32)
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0011204434161525597 of space, bias 1.0, pg target 0.3361330248457679 quantized to 32 (current 32)
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.0808102974801927e-06 of space, bias 1.0, pg target 0.0006242430892440578 quantized to 32 (current 32)
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669154815812933 of space, bias 1.0, pg target 0.200074644474388 quantized to 32 (current 32)
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0202147885055726e-06 of space, bias 4.0, pg target 0.0012242577462066872 quantized to 16 (current 16)
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:05:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 07:05:21 np0005604943 nova_compute[238883]: 2026-02-02 12:05:21.576 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:22 np0005604943 nova_compute[238883]: 2026-02-02 12:05:22.012 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 168 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 80 KiB/s wr, 272 op/s
Feb  2 07:05:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:05:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3320623658' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:05:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:05:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Feb  2 07:05:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Feb  2 07:05:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Feb  2 07:05:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Feb  2 07:05:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Feb  2 07:05:23 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Feb  2 07:05:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 168 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 152 op/s
Feb  2 07:05:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:05:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1540005608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:05:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:05:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1540005608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:05:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 168 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 11 KiB/s wr, 105 op/s
Feb  2 07:05:26 np0005604943 nova_compute[238883]: 2026-02-02 12:05:26.578 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:27 np0005604943 nova_compute[238883]: 2026-02-02 12:05:27.014 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:27 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:27Z|00030|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.12
Feb  2 07:05:27 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:27Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d6:f3:3d 10.100.0.12
Feb  2 07:05:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:05:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Feb  2 07:05:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Feb  2 07:05:27 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Feb  2 07:05:27 np0005604943 nova_compute[238883]: 2026-02-02 12:05:27.880 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:27 np0005604943 nova_compute[238883]: 2026-02-02 12:05:27.880 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:27 np0005604943 nova_compute[238883]: 2026-02-02 12:05:27.897 238887 DEBUG nova.compute.manager [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:05:27 np0005604943 nova_compute[238883]: 2026-02-02 12:05:27.975 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:27 np0005604943 nova_compute[238883]: 2026-02-02 12:05:27.976 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:27 np0005604943 nova_compute[238883]: 2026-02-02 12:05:27.985 238887 DEBUG nova.virt.hardware [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:05:27 np0005604943 nova_compute[238883]: 2026-02-02 12:05:27.985 238887 INFO nova.compute.claims [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.065 238887 DEBUG nova.scheduler.client.report [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Refreshing inventories for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.082 238887 DEBUG nova.scheduler.client.report [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Updating ProviderTree inventory for provider 30401227-b88f-415d-9c2d-3119bd1baf61 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.082 238887 DEBUG nova.compute.provider_tree [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Updating inventory in ProviderTree for provider 30401227-b88f-415d-9c2d-3119bd1baf61 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.095 238887 DEBUG nova.scheduler.client.report [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Refreshing aggregate associations for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.113 238887 DEBUG nova.scheduler.client.report [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Refreshing trait associations for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.171 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 176 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 419 KiB/s wr, 99 op/s
Feb  2 07:05:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:05:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/703867876' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.713 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.719 238887 DEBUG nova.compute.provider_tree [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.736 238887 DEBUG nova.scheduler.client.report [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.768 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.769 238887 DEBUG nova.compute.manager [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.820 238887 DEBUG nova.compute.manager [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.820 238887 DEBUG nova.network.neutron [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.843 238887 INFO nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.863 238887 DEBUG nova.compute.manager [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.945 238887 DEBUG nova.compute.manager [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.946 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.947 238887 INFO nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Creating image(s)#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.967 238887 DEBUG nova.storage.rbd_utils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] rbd image 804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:05:28 np0005604943 nova_compute[238883]: 2026-02-02 12:05:28.989 238887 DEBUG nova.storage.rbd_utils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] rbd image 804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.012 238887 DEBUG nova.storage.rbd_utils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] rbd image 804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.017 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.050 238887 DEBUG nova.policy [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '084f489a7b4c4fecba7b0942ed1b7203', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '851fb6d80faf43cc9b2fef1913323704', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.112 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.113 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.114 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.115 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.141 238887 DEBUG nova.storage.rbd_utils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] rbd image 804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.145 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:29 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.345 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.405 238887 DEBUG nova.storage.rbd_utils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] resizing rbd image 804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.480 238887 DEBUG nova.objects.instance [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lazy-loading 'migration_context' on Instance uuid 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.499 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.499 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Ensure instance console log exists: /var/lib/nova/instances/804c52ce-4b15-4c12-bfe7-efe1281d3dc1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.500 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.500 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.500 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:29 np0005604943 nova_compute[238883]: 2026-02-02 12:05:29.674 238887 DEBUG nova.network.neutron [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Successfully created port: 526abf6f-0054-4f1e-8c8c-761a2476046a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:05:30 np0005604943 nova_compute[238883]: 2026-02-02 12:05:30.335 238887 DEBUG nova.network.neutron [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Successfully updated port: 526abf6f-0054-4f1e-8c8c-761a2476046a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:05:30 np0005604943 nova_compute[238883]: 2026-02-02 12:05:30.346 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "refresh_cache-804c52ce-4b15-4c12-bfe7-efe1281d3dc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:05:30 np0005604943 nova_compute[238883]: 2026-02-02 12:05:30.347 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquired lock "refresh_cache-804c52ce-4b15-4c12-bfe7-efe1281d3dc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:05:30 np0005604943 nova_compute[238883]: 2026-02-02 12:05:30.347 238887 DEBUG nova.network.neutron [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:05:30 np0005604943 podman[262533]: 2026-02-02 12:05:30.379158998 +0000 UTC m=+0.062143979 container exec fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 07:05:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 182 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 775 KiB/s wr, 152 op/s
Feb  2 07:05:30 np0005604943 nova_compute[238883]: 2026-02-02 12:05:30.418 238887 DEBUG nova.compute.manager [req-3bf14ce0-23b5-40a6-8706-b639c0ffd27b req-b15d6fe9-050d-481f-9e26-1d361b934bfa 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Received event network-changed-526abf6f-0054-4f1e-8c8c-761a2476046a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:30 np0005604943 nova_compute[238883]: 2026-02-02 12:05:30.419 238887 DEBUG nova.compute.manager [req-3bf14ce0-23b5-40a6-8706-b639c0ffd27b req-b15d6fe9-050d-481f-9e26-1d361b934bfa 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Refreshing instance network info cache due to event network-changed-526abf6f-0054-4f1e-8c8c-761a2476046a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:05:30 np0005604943 nova_compute[238883]: 2026-02-02 12:05:30.419 238887 DEBUG oslo_concurrency.lockutils [req-3bf14ce0-23b5-40a6-8706-b639c0ffd27b req-b15d6fe9-050d-481f-9e26-1d361b934bfa 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-804c52ce-4b15-4c12-bfe7-efe1281d3dc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:05:30 np0005604943 nova_compute[238883]: 2026-02-02 12:05:30.447 238887 DEBUG nova.network.neutron [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:05:30 np0005604943 podman[262533]: 2026-02-02 12:05:30.483668842 +0000 UTC m=+0.166653833 container exec_died fffb528e321276c0c3873a515991dd68a346504106615c6708abcd60682ada04 (image=quay.io/ceph/ceph:v20, name=ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.135 238887 DEBUG nova.network.neutron [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Updating instance_info_cache with network_info: [{"id": "526abf6f-0054-4f1e-8c8c-761a2476046a", "address": "fa:16:3e:53:a6:48", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap526abf6f-00", "ovs_interfaceid": "526abf6f-0054-4f1e-8c8c-761a2476046a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.155 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Releasing lock "refresh_cache-804c52ce-4b15-4c12-bfe7-efe1281d3dc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.155 238887 DEBUG nova.compute.manager [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Instance network_info: |[{"id": "526abf6f-0054-4f1e-8c8c-761a2476046a", "address": "fa:16:3e:53:a6:48", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap526abf6f-00", "ovs_interfaceid": "526abf6f-0054-4f1e-8c8c-761a2476046a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.155 238887 DEBUG oslo_concurrency.lockutils [req-3bf14ce0-23b5-40a6-8706-b639c0ffd27b req-b15d6fe9-050d-481f-9e26-1d361b934bfa 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-804c52ce-4b15-4c12-bfe7-efe1281d3dc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.156 238887 DEBUG nova.network.neutron [req-3bf14ce0-23b5-40a6-8706-b639c0ffd27b req-b15d6fe9-050d-481f-9e26-1d361b934bfa 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Refreshing network info cache for port 526abf6f-0054-4f1e-8c8c-761a2476046a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.158 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Start _get_guest_xml network_info=[{"id": "526abf6f-0054-4f1e-8c8c-761a2476046a", "address": "fa:16:3e:53:a6:48", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap526abf6f-00", "ovs_interfaceid": "526abf6f-0054-4f1e-8c8c-761a2476046a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.163 238887 WARNING nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.169 238887 DEBUG nova.virt.libvirt.host [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.169 238887 DEBUG nova.virt.libvirt.host [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.176 238887 DEBUG nova.virt.libvirt.host [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.177 238887 DEBUG nova.virt.libvirt.host [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.177 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.177 238887 DEBUG nova.virt.hardware [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.178 238887 DEBUG nova.virt.hardware [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.178 238887 DEBUG nova.virt.hardware [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.178 238887 DEBUG nova.virt.hardware [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.178 238887 DEBUG nova.virt.hardware [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.178 238887 DEBUG nova.virt.hardware [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.178 238887 DEBUG nova.virt.hardware [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.179 238887 DEBUG nova.virt.hardware [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.179 238887 DEBUG nova.virt.hardware [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.179 238887 DEBUG nova.virt.hardware [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.179 238887 DEBUG nova.virt.hardware [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.181 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:31 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:31Z|00032|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.12
Feb  2 07:05:31 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:31Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d6:f3:3d 10.100.0.12
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.582 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/870665239' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.765 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.791 238887 DEBUG nova.storage.rbd_utils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] rbd image 804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:05:31 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.798 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:05:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:31.999 238887 DEBUG nova.network.neutron [req-3bf14ce0-23b5-40a6-8706-b639c0ffd27b req-b15d6fe9-050d-481f-9e26-1d361b934bfa 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Updated VIF entry in instance network info cache for port 526abf6f-0054-4f1e-8c8c-761a2476046a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.000 238887 DEBUG nova.network.neutron [req-3bf14ce0-23b5-40a6-8706-b639c0ffd27b req-b15d6fe9-050d-481f-9e26-1d361b934bfa 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Updating instance_info_cache with network_info: [{"id": "526abf6f-0054-4f1e-8c8c-761a2476046a", "address": "fa:16:3e:53:a6:48", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap526abf6f-00", "ovs_interfaceid": "526abf6f-0054-4f1e-8c8c-761a2476046a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.015 238887 DEBUG oslo_concurrency.lockutils [req-3bf14ce0-23b5-40a6-8706-b639c0ffd27b req-b15d6fe9-050d-481f-9e26-1d361b934bfa 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-804c52ce-4b15-4c12-bfe7-efe1281d3dc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.016 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:05:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:05:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:05:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:05:32 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:05:32 np0005604943 podman[262921]: 2026-02-02 12:05:32.166853241 +0000 UTC m=+0.040249137 container create 637057815e4be65f2e12f32fa274318499333216504fbf2b30791e4b4716876c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mccarthy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 07:05:32 np0005604943 systemd[1]: Started libpod-conmon-637057815e4be65f2e12f32fa274318499333216504fbf2b30791e4b4716876c.scope.
Feb  2 07:05:32 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:05:32 np0005604943 podman[262921]: 2026-02-02 12:05:32.150880789 +0000 UTC m=+0.024276705 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:05:32 np0005604943 podman[262921]: 2026-02-02 12:05:32.256608532 +0000 UTC m=+0.130004458 container init 637057815e4be65f2e12f32fa274318499333216504fbf2b30791e4b4716876c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 07:05:32 np0005604943 podman[262921]: 2026-02-02 12:05:32.262797949 +0000 UTC m=+0.136193855 container start 637057815e4be65f2e12f32fa274318499333216504fbf2b30791e4b4716876c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mccarthy, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 07:05:32 np0005604943 podman[262921]: 2026-02-02 12:05:32.266446418 +0000 UTC m=+0.139842334 container attach 637057815e4be65f2e12f32fa274318499333216504fbf2b30791e4b4716876c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 07:05:32 np0005604943 loving_mccarthy[262938]: 167 167
Feb  2 07:05:32 np0005604943 systemd[1]: libpod-637057815e4be65f2e12f32fa274318499333216504fbf2b30791e4b4716876c.scope: Deactivated successfully.
Feb  2 07:05:32 np0005604943 podman[262921]: 2026-02-02 12:05:32.278170214 +0000 UTC m=+0.151566110 container died 637057815e4be65f2e12f32fa274318499333216504fbf2b30791e4b4716876c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mccarthy, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 07:05:32 np0005604943 systemd[1]: var-lib-containers-storage-overlay-e3e12fc34a29f840b02d5a7014d74cb4095ad7716cfe7df1dc4dadaf80101881-merged.mount: Deactivated successfully.
Feb  2 07:05:32 np0005604943 podman[262921]: 2026-02-02 12:05:32.318548793 +0000 UTC m=+0.191944689 container remove 637057815e4be65f2e12f32fa274318499333216504fbf2b30791e4b4716876c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 07:05:32 np0005604943 systemd[1]: libpod-conmon-637057815e4be65f2e12f32fa274318499333216504fbf2b30791e4b4716876c.scope: Deactivated successfully.
Feb  2 07:05:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:05:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3310998103' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.371 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.373 238887 DEBUG nova.virt.libvirt.vif [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1363330718',display_name='tempest-TestEncryptedCinderVolumes-server-1363330718',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1363330718',id=19,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBojyyZc1pB4qbhccFknnAVyH2qYCGB8sXr6VXf4RggmuyiwiRN8sR4YyL37CEKqQGLHnWQ85K+Sg330iXkE8rCxhD0x5sAmjwWVf2+FF2jQxgasqZQCdwAdLrujQSitwA==',key_name='tempest-keypair-1925156515',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851fb6d80faf43cc9b2fef1913323704',ramdisk_id='',reservation_id='r-chevi327',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1976450145',owner_user_name='tempest-TestEncryptedCinderVolumes-1976450145-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:05:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='084f489a7b4c4fecba7b0942ed1b7203',uuid=804c52ce-4b15-4c12-bfe7-efe1281d3dc1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "526abf6f-0054-4f1e-8c8c-761a2476046a", "address": "fa:16:3e:53:a6:48", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap526abf6f-00", "ovs_interfaceid": "526abf6f-0054-4f1e-8c8c-761a2476046a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.373 238887 DEBUG nova.network.os_vif_util [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converting VIF {"id": "526abf6f-0054-4f1e-8c8c-761a2476046a", "address": "fa:16:3e:53:a6:48", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap526abf6f-00", "ovs_interfaceid": "526abf6f-0054-4f1e-8c8c-761a2476046a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.374 238887 DEBUG nova.network.os_vif_util [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:a6:48,bridge_name='br-int',has_traffic_filtering=True,id=526abf6f-0054-4f1e-8c8c-761a2476046a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap526abf6f-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.375 238887 DEBUG nova.objects.instance [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lazy-loading 'pci_devices' on Instance uuid 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:05:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 221 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.6 MiB/s wr, 139 op/s
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.402 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  <uuid>804c52ce-4b15-4c12-bfe7-efe1281d3dc1</uuid>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  <name>instance-00000013</name>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-1363330718</nova:name>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:05:31</nova:creationTime>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:        <nova:user uuid="084f489a7b4c4fecba7b0942ed1b7203">tempest-TestEncryptedCinderVolumes-1976450145-project-member</nova:user>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:        <nova:project uuid="851fb6d80faf43cc9b2fef1913323704">tempest-TestEncryptedCinderVolumes-1976450145</nova:project>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:        <nova:port uuid="526abf6f-0054-4f1e-8c8c-761a2476046a">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <entry name="serial">804c52ce-4b15-4c12-bfe7-efe1281d3dc1</entry>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <entry name="uuid">804c52ce-4b15-4c12-bfe7-efe1281d3dc1</entry>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk.config">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:53:a6:48"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <target dev="tap526abf6f-00"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/804c52ce-4b15-4c12-bfe7-efe1281d3dc1/console.log" append="off"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:05:32 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:05:32 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:05:32 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:05:32 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.403 238887 DEBUG nova.compute.manager [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Preparing to wait for external event network-vif-plugged-526abf6f-0054-4f1e-8c8c-761a2476046a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.403 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.403 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.404 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.404 238887 DEBUG nova.virt.libvirt.vif [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1363330718',display_name='tempest-TestEncryptedCinderVolumes-server-1363330718',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1363330718',id=19,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBojyyZc1pB4qbhccFknnAVyH2qYCGB8sXr6VXf4RggmuyiwiRN8sR4YyL37CEKqQGLHnWQ85K+Sg330iXkE8rCxhD0x5sAmjwWVf2+FF2jQxgasqZQCdwAdLrujQSitwA==',key_name='tempest-keypair-1925156515',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851fb6d80faf43cc9b2fef1913323704',ramdisk_id='',reservation_id='r-chevi327',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1976450145',owner_user_name='tempest-TestEncryptedCinderVolumes-1976450145-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:05:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='084f489a7b4c4fecba7b0942ed1b7203',uuid=804c52ce-4b15-4c12-bfe7-efe1281d3dc1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "526abf6f-0054-4f1e-8c8c-761a2476046a", "address": "fa:16:3e:53:a6:48", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap526abf6f-00", "ovs_interfaceid": "526abf6f-0054-4f1e-8c8c-761a2476046a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.405 238887 DEBUG nova.network.os_vif_util [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converting VIF {"id": "526abf6f-0054-4f1e-8c8c-761a2476046a", "address": "fa:16:3e:53:a6:48", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap526abf6f-00", "ovs_interfaceid": "526abf6f-0054-4f1e-8c8c-761a2476046a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.406 238887 DEBUG nova.network.os_vif_util [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:a6:48,bridge_name='br-int',has_traffic_filtering=True,id=526abf6f-0054-4f1e-8c8c-761a2476046a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap526abf6f-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.406 238887 DEBUG os_vif [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:a6:48,bridge_name='br-int',has_traffic_filtering=True,id=526abf6f-0054-4f1e-8c8c-761a2476046a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap526abf6f-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.407 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.407 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.408 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.412 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.412 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap526abf6f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.413 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap526abf6f-00, col_values=(('external_ids', {'iface-id': '526abf6f-0054-4f1e-8c8c-761a2476046a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:a6:48', 'vm-uuid': '804c52ce-4b15-4c12-bfe7-efe1281d3dc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.414 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:32 np0005604943 NetworkManager[49093]: <info>  [1770033932.4156] manager: (tap526abf6f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.419 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.421 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.422 238887 INFO os_vif [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:a6:48,bridge_name='br-int',has_traffic_filtering=True,id=526abf6f-0054-4f1e-8c8c-761a2476046a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap526abf6f-00')#033[00m
Feb  2 07:05:32 np0005604943 podman[262964]: 2026-02-02 12:05:32.468123569 +0000 UTC m=+0.045195291 container create 6a52a14eaae7e94098c3534f9caf83248eafb1df036bc0b98d506a05e4b6d9c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hermann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.469 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.469 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.469 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] No VIF found with MAC fa:16:3e:53:a6:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.470 238887 INFO nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Using config drive#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.500 238887 DEBUG nova.storage.rbd_utils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] rbd image 804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:05:32 np0005604943 systemd[1]: Started libpod-conmon-6a52a14eaae7e94098c3534f9caf83248eafb1df036bc0b98d506a05e4b6d9c9.scope.
Feb  2 07:05:32 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:05:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55888adef8653774c7c7f40d6f6b01e1c271f9bc382dbf2c212957e226bd1cc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55888adef8653774c7c7f40d6f6b01e1c271f9bc382dbf2c212957e226bd1cc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55888adef8653774c7c7f40d6f6b01e1c271f9bc382dbf2c212957e226bd1cc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55888adef8653774c7c7f40d6f6b01e1c271f9bc382dbf2c212957e226bd1cc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:32 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55888adef8653774c7c7f40d6f6b01e1c271f9bc382dbf2c212957e226bd1cc8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:32 np0005604943 podman[262964]: 2026-02-02 12:05:32.538498557 +0000 UTC m=+0.115570289 container init 6a52a14eaae7e94098c3534f9caf83248eafb1df036bc0b98d506a05e4b6d9c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hermann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 07:05:32 np0005604943 podman[262964]: 2026-02-02 12:05:32.451175521 +0000 UTC m=+0.028247233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:05:32 np0005604943 podman[262964]: 2026-02-02 12:05:32.549457593 +0000 UTC m=+0.126529305 container start 6a52a14eaae7e94098c3534f9caf83248eafb1df036bc0b98d506a05e4b6d9c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 07:05:32 np0005604943 podman[262964]: 2026-02-02 12:05:32.555167847 +0000 UTC m=+0.132239779 container attach 6a52a14eaae7e94098c3534f9caf83248eafb1df036bc0b98d506a05e4b6d9c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hermann, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:05:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:32Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:f3:3d 10.100.0.12
Feb  2 07:05:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:32Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:f3:3d 10.100.0.12
Feb  2 07:05:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.866 238887 INFO nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Creating config drive at /var/lib/nova/instances/804c52ce-4b15-4c12-bfe7-efe1281d3dc1/disk.config#033[00m
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.871 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/804c52ce-4b15-4c12-bfe7-efe1281d3dc1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmp9uyzui execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:32 np0005604943 nice_hermann[263001]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:05:32 np0005604943 nice_hermann[263001]: --> All data devices are unavailable
Feb  2 07:05:32 np0005604943 systemd[1]: libpod-6a52a14eaae7e94098c3534f9caf83248eafb1df036bc0b98d506a05e4b6d9c9.scope: Deactivated successfully.
Feb  2 07:05:32 np0005604943 podman[262964]: 2026-02-02 12:05:32.994700324 +0000 UTC m=+0.571772126 container died 6a52a14eaae7e94098c3534f9caf83248eafb1df036bc0b98d506a05e4b6d9c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:05:32 np0005604943 nova_compute[238883]: 2026-02-02 12:05:32.996 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/804c52ce-4b15-4c12-bfe7-efe1281d3dc1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmp9uyzui" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:33 np0005604943 systemd[1]: var-lib-containers-storage-overlay-55888adef8653774c7c7f40d6f6b01e1c271f9bc382dbf2c212957e226bd1cc8-merged.mount: Deactivated successfully.
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.034 238887 DEBUG nova.storage.rbd_utils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] rbd image 804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.041 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/804c52ce-4b15-4c12-bfe7-efe1281d3dc1/disk.config 804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:33 np0005604943 podman[262964]: 2026-02-02 12:05:33.044526548 +0000 UTC m=+0.621598260 container remove 6a52a14eaae7e94098c3534f9caf83248eafb1df036bc0b98d506a05e4b6d9c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:05:33 np0005604943 systemd[1]: libpod-conmon-6a52a14eaae7e94098c3534f9caf83248eafb1df036bc0b98d506a05e4b6d9c9.scope: Deactivated successfully.
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.160 238887 DEBUG oslo_concurrency.processutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/804c52ce-4b15-4c12-bfe7-efe1281d3dc1/disk.config 804c52ce-4b15-4c12-bfe7-efe1281d3dc1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.161 238887 INFO nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Deleting local config drive /var/lib/nova/instances/804c52ce-4b15-4c12-bfe7-efe1281d3dc1/disk.config because it was imported into RBD.#033[00m
Feb  2 07:05:33 np0005604943 kernel: tap526abf6f-00: entered promiscuous mode
Feb  2 07:05:33 np0005604943 NetworkManager[49093]: <info>  [1770033933.2156] manager: (tap526abf6f-00): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.217 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:33 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:33Z|00183|binding|INFO|Claiming lport 526abf6f-0054-4f1e-8c8c-761a2476046a for this chassis.
Feb  2 07:05:33 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:33Z|00184|binding|INFO|526abf6f-0054-4f1e-8c8c-761a2476046a: Claiming fa:16:3e:53:a6:48 10.100.0.6
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.226 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:a6:48 10.100.0.6'], port_security=['fa:16:3e:53:a6:48 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '804c52ce-4b15-4c12-bfe7-efe1281d3dc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb13b2a6-b763-41ef-a5c4-123372e94249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851fb6d80faf43cc9b2fef1913323704', 'neutron:revision_number': '2', 'neutron:security_group_ids': '70b41b0b-c892-46d7-b5d9-14c26fc19c78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10f2dc12-4c00-4783-968f-4cacec86630e, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=526abf6f-0054-4f1e-8c8c-761a2476046a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.228 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.228 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 526abf6f-0054-4f1e-8c8c-761a2476046a in datapath fb13b2a6-b763-41ef-a5c4-123372e94249 bound to our chassis#033[00m
Feb  2 07:05:33 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:33Z|00185|binding|INFO|Setting lport 526abf6f-0054-4f1e-8c8c-761a2476046a ovn-installed in OVS
Feb  2 07:05:33 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:33Z|00186|binding|INFO|Setting lport 526abf6f-0054-4f1e-8c8c-761a2476046a up in Southbound
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.230 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fb13b2a6-b763-41ef-a5c4-123372e94249#033[00m
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.231 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.241 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ce51b5bc-8761-4306-9d72-1623ef2d845c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.241 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfb13b2a6-b1 in ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.243 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfb13b2a6-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.243 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fc560c8d-10bb-4bf4-b68e-00cc7e719628]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.244 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc99007-efbd-49a8-b23e-eae2965bf175]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 systemd-machined[206973]: New machine qemu-19-instance-00000013.
Feb  2 07:05:33 np0005604943 systemd-udevd[263137]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.261 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[4867ea87-c9b9-4123-92a8-37e038ce8745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Feb  2 07:05:33 np0005604943 NetworkManager[49093]: <info>  [1770033933.2702] device (tap526abf6f-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:05:33 np0005604943 NetworkManager[49093]: <info>  [1770033933.2709] device (tap526abf6f-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.278 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd105ac-8ca2-4a5a-8114-79841f5b5673]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.305 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[08295386-8676-4458-b7b3-80200b53e196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 systemd-udevd[263140]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:05:33 np0005604943 NetworkManager[49093]: <info>  [1770033933.3142] manager: (tapfb13b2a6-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.312 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[01afd4dd-ef0b-4255-bd09-af36c57af91f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.341 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[7157b1ec-0c4b-4ae8-a816-dc2b1ab4097c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.344 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[6530087e-f1e1-4acd-ae9f-1e4a7388e89f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 NetworkManager[49093]: <info>  [1770033933.3620] device (tapfb13b2a6-b0): carrier: link connected
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.367 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[8d0d566a-0025-4176-9a78-283ddc6b25ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.382 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[af34fcfc-856f-4a22-85c5-45080a332dc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb13b2a6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:41:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437871, 'reachable_time': 21886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263168, 'error': None, 'target': 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.395 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d2434423-bc85-40f5-8cde-96655979cbc6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed1:4144'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437871, 'tstamp': 437871}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263176, 'error': None, 'target': 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.408 238887 DEBUG nova.compute.manager [req-3a6c2aae-ecfd-44d2-820c-519ce41ac408 req-87174522-5ce9-48fb-bd10-21ddb7402ec4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Received event network-vif-plugged-526abf6f-0054-4f1e-8c8c-761a2476046a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.409 238887 DEBUG oslo_concurrency.lockutils [req-3a6c2aae-ecfd-44d2-820c-519ce41ac408 req-87174522-5ce9-48fb-bd10-21ddb7402ec4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.409 238887 DEBUG oslo_concurrency.lockutils [req-3a6c2aae-ecfd-44d2-820c-519ce41ac408 req-87174522-5ce9-48fb-bd10-21ddb7402ec4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.409 238887 DEBUG oslo_concurrency.lockutils [req-3a6c2aae-ecfd-44d2-820c-519ce41ac408 req-87174522-5ce9-48fb-bd10-21ddb7402ec4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.410 238887 DEBUG nova.compute.manager [req-3a6c2aae-ecfd-44d2-820c-519ce41ac408 req-87174522-5ce9-48fb-bd10-21ddb7402ec4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Processing event network-vif-plugged-526abf6f-0054-4f1e-8c8c-761a2476046a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.413 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[84477990-c4f5-463e-9ea7-2439326217d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb13b2a6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:41:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437871, 'reachable_time': 21886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263182, 'error': None, 'target': 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.440 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[050eb91a-4802-435a-9faa-21225dc19f1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 podman[263183]: 2026-02-02 12:05:33.469578816 +0000 UTC m=+0.041295466 container create 3ac158df94f0850ed249ee8782f3614008d7151e4f2ebb164c825fe0dda81607 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_robinson, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.497 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3a47398c-dfd1-4b34-96e3-ae82683ca449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.498 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb13b2a6-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.498 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.499 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb13b2a6-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:33 np0005604943 NetworkManager[49093]: <info>  [1770033933.5013] manager: (tapfb13b2a6-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Feb  2 07:05:33 np0005604943 kernel: tapfb13b2a6-b0: entered promiscuous mode
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.500 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.503 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.503 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfb13b2a6-b0, col_values=(('external_ids', {'iface-id': '1d9983aa-de5e-40a5-bc99-8bde08c14b08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.504 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:33 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:33Z|00187|binding|INFO|Releasing lport 1d9983aa-de5e-40a5-bc99-8bde08c14b08 from this chassis (sb_readonly=0)
Feb  2 07:05:33 np0005604943 nova_compute[238883]: 2026-02-02 12:05:33.511 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.513 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fb13b2a6-b763-41ef-a5c4-123372e94249.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fb13b2a6-b763-41ef-a5c4-123372e94249.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.513 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8ae37d-64bb-40b8-bb64-59eb12d08b2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.514 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-fb13b2a6-b763-41ef-a5c4-123372e94249
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/fb13b2a6-b763-41ef-a5c4-123372e94249.pid.haproxy
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID fb13b2a6-b763-41ef-a5c4-123372e94249
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 07:05:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:33.515 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'env', 'PROCESS_TAG=haproxy-fb13b2a6-b763-41ef-a5c4-123372e94249', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fb13b2a6-b763-41ef-a5c4-123372e94249.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 07:05:33 np0005604943 systemd[1]: Started libpod-conmon-3ac158df94f0850ed249ee8782f3614008d7151e4f2ebb164c825fe0dda81607.scope.
Feb  2 07:05:33 np0005604943 podman[263183]: 2026-02-02 12:05:33.449650898 +0000 UTC m=+0.021367578 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:05:33 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:05:33 np0005604943 podman[263183]: 2026-02-02 12:05:33.581085664 +0000 UTC m=+0.152802324 container init 3ac158df94f0850ed249ee8782f3614008d7151e4f2ebb164c825fe0dda81607 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_robinson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 07:05:33 np0005604943 podman[263183]: 2026-02-02 12:05:33.588071673 +0000 UTC m=+0.159788313 container start 3ac158df94f0850ed249ee8782f3614008d7151e4f2ebb164c825fe0dda81607 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_robinson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:05:33 np0005604943 podman[263183]: 2026-02-02 12:05:33.591214517 +0000 UTC m=+0.162931167 container attach 3ac158df94f0850ed249ee8782f3614008d7151e4f2ebb164c825fe0dda81607 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_robinson, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:05:33 np0005604943 sweet_robinson[263208]: 167 167
Feb  2 07:05:33 np0005604943 systemd[1]: libpod-3ac158df94f0850ed249ee8782f3614008d7151e4f2ebb164c825fe0dda81607.scope: Deactivated successfully.
Feb  2 07:05:33 np0005604943 podman[263183]: 2026-02-02 12:05:33.595418331 +0000 UTC m=+0.167134981 container died 3ac158df94f0850ed249ee8782f3614008d7151e4f2ebb164c825fe0dda81607 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_robinson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 07:05:33 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ee404c76083d5918b323f294f053068e5b48b0d0c4f1781bbbc073a49892396c-merged.mount: Deactivated successfully.
Feb  2 07:05:33 np0005604943 podman[263183]: 2026-02-02 12:05:33.627228189 +0000 UTC m=+0.198944839 container remove 3ac158df94f0850ed249ee8782f3614008d7151e4f2ebb164c825fe0dda81607 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_robinson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:05:33 np0005604943 systemd[1]: libpod-conmon-3ac158df94f0850ed249ee8782f3614008d7151e4f2ebb164c825fe0dda81607.scope: Deactivated successfully.
Feb  2 07:05:33 np0005604943 podman[263232]: 2026-02-02 12:05:33.764694297 +0000 UTC m=+0.034358788 container create ff7d58f0d69abbca755c5c1395c206819e3a968d038f1555df2854fc59363493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_williams, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 07:05:33 np0005604943 systemd[1]: Started libpod-conmon-ff7d58f0d69abbca755c5c1395c206819e3a968d038f1555df2854fc59363493.scope.
Feb  2 07:05:33 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:05:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b95b2e83e041123da50aa99b9a13d251327099bcdd8d6290109b3488a12e54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b95b2e83e041123da50aa99b9a13d251327099bcdd8d6290109b3488a12e54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b95b2e83e041123da50aa99b9a13d251327099bcdd8d6290109b3488a12e54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b95b2e83e041123da50aa99b9a13d251327099bcdd8d6290109b3488a12e54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:33 np0005604943 podman[263232]: 2026-02-02 12:05:33.833138104 +0000 UTC m=+0.102802645 container init ff7d58f0d69abbca755c5c1395c206819e3a968d038f1555df2854fc59363493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_williams, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 07:05:33 np0005604943 podman[263232]: 2026-02-02 12:05:33.84040658 +0000 UTC m=+0.110071071 container start ff7d58f0d69abbca755c5c1395c206819e3a968d038f1555df2854fc59363493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_williams, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:05:33 np0005604943 podman[263232]: 2026-02-02 12:05:33.843968975 +0000 UTC m=+0.113633486 container attach ff7d58f0d69abbca755c5c1395c206819e3a968d038f1555df2854fc59363493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_williams, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:05:33 np0005604943 podman[263232]: 2026-02-02 12:05:33.75146598 +0000 UTC m=+0.021130491 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:05:33 np0005604943 podman[263270]: 2026-02-02 12:05:33.87490088 +0000 UTC m=+0.062079276 container create 0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:05:33 np0005604943 systemd[1]: Started libpod-conmon-0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406.scope.
Feb  2 07:05:33 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:05:33 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acabbbcc7edf9cf5edb35bf26a5ac66fbb8ff8cc26def29aa4acab7acf2a92d7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:33 np0005604943 podman[263270]: 2026-02-02 12:05:33.852140506 +0000 UTC m=+0.039318892 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:05:33 np0005604943 podman[263270]: 2026-02-02 12:05:33.951971619 +0000 UTC m=+0.139150005 container init 0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 07:05:33 np0005604943 podman[263270]: 2026-02-02 12:05:33.96050775 +0000 UTC m=+0.147686126 container start 0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true)
Feb  2 07:05:33 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[263288]: [NOTICE]   (263294) : New worker (263296) forked
Feb  2 07:05:33 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[263288]: [NOTICE]   (263294) : Loading success.
Feb  2 07:05:34 np0005604943 sweet_williams[263271]: {
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:    "0": [
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:        {
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "devices": [
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "/dev/loop3"
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            ],
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_name": "ceph_lv0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_size": "21470642176",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "name": "ceph_lv0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "tags": {
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.cluster_name": "ceph",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.crush_device_class": "",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.encrypted": "0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.objectstore": "bluestore",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.osd_id": "0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.type": "block",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.vdo": "0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.with_tpm": "0"
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            },
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "type": "block",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "vg_name": "ceph_vg0"
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:        }
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:    ],
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:    "1": [
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:        {
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "devices": [
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "/dev/loop4"
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            ],
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_name": "ceph_lv1",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_size": "21470642176",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "name": "ceph_lv1",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "tags": {
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.cluster_name": "ceph",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.crush_device_class": "",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.encrypted": "0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.objectstore": "bluestore",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.osd_id": "1",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.type": "block",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.vdo": "0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.with_tpm": "0"
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            },
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "type": "block",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "vg_name": "ceph_vg1"
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:        }
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:    ],
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:    "2": [
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:        {
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "devices": [
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "/dev/loop5"
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            ],
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_name": "ceph_lv2",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_size": "21470642176",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "name": "ceph_lv2",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "tags": {
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.cluster_name": "ceph",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.crush_device_class": "",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.encrypted": "0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.objectstore": "bluestore",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.osd_id": "2",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.type": "block",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.vdo": "0",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:                "ceph.with_tpm": "0"
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            },
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "type": "block",
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:            "vg_name": "ceph_vg2"
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:        }
Feb  2 07:05:34 np0005604943 sweet_williams[263271]:    ]
Feb  2 07:05:34 np0005604943 sweet_williams[263271]: }
Feb  2 07:05:34 np0005604943 systemd[1]: libpod-ff7d58f0d69abbca755c5c1395c206819e3a968d038f1555df2854fc59363493.scope: Deactivated successfully.
Feb  2 07:05:34 np0005604943 podman[263345]: 2026-02-02 12:05:34.234316356 +0000 UTC m=+0.035569850 container died ff7d58f0d69abbca755c5c1395c206819e3a968d038f1555df2854fc59363493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 07:05:34 np0005604943 systemd[1]: var-lib-containers-storage-overlay-22b95b2e83e041123da50aa99b9a13d251327099bcdd8d6290109b3488a12e54-merged.mount: Deactivated successfully.
Feb  2 07:05:34 np0005604943 podman[263345]: 2026-02-02 12:05:34.278060986 +0000 UTC m=+0.079314430 container remove ff7d58f0d69abbca755c5c1395c206819e3a968d038f1555df2854fc59363493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_williams, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 07:05:34 np0005604943 systemd[1]: libpod-conmon-ff7d58f0d69abbca755c5c1395c206819e3a968d038f1555df2854fc59363493.scope: Deactivated successfully.
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.297 238887 DEBUG nova.compute.manager [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.299 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033934.2965157, 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.299 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] VM Started (Lifecycle Event)#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.306 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.311 238887 INFO nova.virt.libvirt.driver [-] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Instance spawned successfully.#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.312 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.319 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.322 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.329 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.330 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.330 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.330 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.331 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.331 238887 DEBUG nova.virt.libvirt.driver [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.341 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.342 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033934.296649, 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.342 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.366 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.369 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033934.303951, 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.370 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.391 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.395 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:05:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 232 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 133 op/s
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.406 238887 INFO nova.compute.manager [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Took 5.46 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.407 238887 DEBUG nova.compute.manager [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.430 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.461 238887 INFO nova.compute.manager [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Took 6.51 seconds to build instance.#033[00m
Feb  2 07:05:34 np0005604943 nova_compute[238883]: 2026-02-02 12:05:34.479 238887 DEBUG oslo_concurrency.lockutils [None req-72a4c5e8-b0f5-4d8f-91f4-85096771a3f9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:34 np0005604943 podman[263426]: 2026-02-02 12:05:34.69370277 +0000 UTC m=+0.038142580 container create ff82bbb64e68d1a41bbca70432fc39f714f2d52372a676bb43dfd42c43e985b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_khayyam, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:05:34 np0005604943 systemd[1]: Started libpod-conmon-ff82bbb64e68d1a41bbca70432fc39f714f2d52372a676bb43dfd42c43e985b8.scope.
Feb  2 07:05:34 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:05:34 np0005604943 podman[263426]: 2026-02-02 12:05:34.677135023 +0000 UTC m=+0.021574963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:05:34 np0005604943 podman[263426]: 2026-02-02 12:05:34.775474646 +0000 UTC m=+0.119914476 container init ff82bbb64e68d1a41bbca70432fc39f714f2d52372a676bb43dfd42c43e985b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_khayyam, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 07:05:34 np0005604943 podman[263426]: 2026-02-02 12:05:34.781095868 +0000 UTC m=+0.125535678 container start ff82bbb64e68d1a41bbca70432fc39f714f2d52372a676bb43dfd42c43e985b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 07:05:34 np0005604943 podman[263426]: 2026-02-02 12:05:34.784039947 +0000 UTC m=+0.128479757 container attach ff82bbb64e68d1a41bbca70432fc39f714f2d52372a676bb43dfd42c43e985b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_khayyam, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 07:05:34 np0005604943 hungry_khayyam[263442]: 167 167
Feb  2 07:05:34 np0005604943 systemd[1]: libpod-ff82bbb64e68d1a41bbca70432fc39f714f2d52372a676bb43dfd42c43e985b8.scope: Deactivated successfully.
Feb  2 07:05:34 np0005604943 conmon[263442]: conmon ff82bbb64e68d1a41bbc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff82bbb64e68d1a41bbca70432fc39f714f2d52372a676bb43dfd42c43e985b8.scope/container/memory.events
Feb  2 07:05:34 np0005604943 podman[263426]: 2026-02-02 12:05:34.786713159 +0000 UTC m=+0.131152959 container died ff82bbb64e68d1a41bbca70432fc39f714f2d52372a676bb43dfd42c43e985b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_khayyam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:05:34 np0005604943 systemd[1]: var-lib-containers-storage-overlay-74436849b01ceb194646dbc0c00e67c912133a6bb39683c3c9efffdd843bfc7c-merged.mount: Deactivated successfully.
Feb  2 07:05:34 np0005604943 podman[263426]: 2026-02-02 12:05:34.819930455 +0000 UTC m=+0.164370265 container remove ff82bbb64e68d1a41bbca70432fc39f714f2d52372a676bb43dfd42c43e985b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_khayyam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:05:34 np0005604943 systemd[1]: libpod-conmon-ff82bbb64e68d1a41bbca70432fc39f714f2d52372a676bb43dfd42c43e985b8.scope: Deactivated successfully.
Feb  2 07:05:34 np0005604943 podman[263466]: 2026-02-02 12:05:34.960328893 +0000 UTC m=+0.038961182 container create c74737d103c1a9d0a1078644f90ec9f897dccdec27d3f15127e0bb5406613184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 07:05:34 np0005604943 systemd[1]: Started libpod-conmon-c74737d103c1a9d0a1078644f90ec9f897dccdec27d3f15127e0bb5406613184.scope.
Feb  2 07:05:35 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:05:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696a971d265ed28aedfd8fd167e36b56d75e246bf59a15376dad2db41b7edb99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696a971d265ed28aedfd8fd167e36b56d75e246bf59a15376dad2db41b7edb99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696a971d265ed28aedfd8fd167e36b56d75e246bf59a15376dad2db41b7edb99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:35 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696a971d265ed28aedfd8fd167e36b56d75e246bf59a15376dad2db41b7edb99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:05:35 np0005604943 podman[263466]: 2026-02-02 12:05:34.944700351 +0000 UTC m=+0.023332670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:05:35 np0005604943 podman[263466]: 2026-02-02 12:05:35.048228244 +0000 UTC m=+0.126860573 container init c74737d103c1a9d0a1078644f90ec9f897dccdec27d3f15127e0bb5406613184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:05:35 np0005604943 podman[263466]: 2026-02-02 12:05:35.055785998 +0000 UTC m=+0.134418287 container start c74737d103c1a9d0a1078644f90ec9f897dccdec27d3f15127e0bb5406613184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:05:35 np0005604943 podman[263466]: 2026-02-02 12:05:35.059825487 +0000 UTC m=+0.138457826 container attach c74737d103c1a9d0a1078644f90ec9f897dccdec27d3f15127e0bb5406613184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 07:05:35 np0005604943 nova_compute[238883]: 2026-02-02 12:05:35.549 238887 DEBUG nova.compute.manager [req-c8281c12-3888-43dd-b5a6-cad43b38fdc1 req-bd2f31cb-c234-4cfc-8e00-faf1f1459ad2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Received event network-vif-plugged-526abf6f-0054-4f1e-8c8c-761a2476046a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:35 np0005604943 nova_compute[238883]: 2026-02-02 12:05:35.551 238887 DEBUG oslo_concurrency.lockutils [req-c8281c12-3888-43dd-b5a6-cad43b38fdc1 req-bd2f31cb-c234-4cfc-8e00-faf1f1459ad2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:35 np0005604943 nova_compute[238883]: 2026-02-02 12:05:35.551 238887 DEBUG oslo_concurrency.lockutils [req-c8281c12-3888-43dd-b5a6-cad43b38fdc1 req-bd2f31cb-c234-4cfc-8e00-faf1f1459ad2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:35 np0005604943 nova_compute[238883]: 2026-02-02 12:05:35.551 238887 DEBUG oslo_concurrency.lockutils [req-c8281c12-3888-43dd-b5a6-cad43b38fdc1 req-bd2f31cb-c234-4cfc-8e00-faf1f1459ad2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:35 np0005604943 nova_compute[238883]: 2026-02-02 12:05:35.551 238887 DEBUG nova.compute.manager [req-c8281c12-3888-43dd-b5a6-cad43b38fdc1 req-bd2f31cb-c234-4cfc-8e00-faf1f1459ad2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] No waiting events found dispatching network-vif-plugged-526abf6f-0054-4f1e-8c8c-761a2476046a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:05:35 np0005604943 nova_compute[238883]: 2026-02-02 12:05:35.552 238887 WARNING nova.compute.manager [req-c8281c12-3888-43dd-b5a6-cad43b38fdc1 req-bd2f31cb-c234-4cfc-8e00-faf1f1459ad2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Received unexpected event network-vif-plugged-526abf6f-0054-4f1e-8c8c-761a2476046a for instance with vm_state active and task_state None.#033[00m
Feb  2 07:05:35 np0005604943 nova_compute[238883]: 2026-02-02 12:05:35.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:05:35 np0005604943 lvm[263559]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:05:35 np0005604943 lvm[263559]: VG ceph_vg0 finished
Feb  2 07:05:35 np0005604943 lvm[263561]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:05:35 np0005604943 lvm[263561]: VG ceph_vg1 finished
Feb  2 07:05:35 np0005604943 lvm[263562]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:05:35 np0005604943 lvm[263562]: VG ceph_vg2 finished
Feb  2 07:05:35 np0005604943 great_chaplygin[263483]: {}
Feb  2 07:05:35 np0005604943 systemd[1]: libpod-c74737d103c1a9d0a1078644f90ec9f897dccdec27d3f15127e0bb5406613184.scope: Deactivated successfully.
Feb  2 07:05:35 np0005604943 systemd[1]: libpod-c74737d103c1a9d0a1078644f90ec9f897dccdec27d3f15127e0bb5406613184.scope: Consumed 1.214s CPU time.
Feb  2 07:05:35 np0005604943 podman[263466]: 2026-02-02 12:05:35.937754762 +0000 UTC m=+1.016387071 container died c74737d103c1a9d0a1078644f90ec9f897dccdec27d3f15127e0bb5406613184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 07:05:35 np0005604943 systemd[1]: var-lib-containers-storage-overlay-696a971d265ed28aedfd8fd167e36b56d75e246bf59a15376dad2db41b7edb99-merged.mount: Deactivated successfully.
Feb  2 07:05:35 np0005604943 podman[263466]: 2026-02-02 12:05:35.992079168 +0000 UTC m=+1.070711457 container remove c74737d103c1a9d0a1078644f90ec9f897dccdec27d3f15127e0bb5406613184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 07:05:35 np0005604943 systemd[1]: libpod-conmon-c74737d103c1a9d0a1078644f90ec9f897dccdec27d3f15127e0bb5406613184.scope: Deactivated successfully.
Feb  2 07:05:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:05:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:05:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:05:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:05:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:05:36 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:05:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 232 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 133 op/s
Feb  2 07:05:36 np0005604943 nova_compute[238883]: 2026-02-02 12:05:36.583 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:36 np0005604943 nova_compute[238883]: 2026-02-02 12:05:36.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:05:36 np0005604943 nova_compute[238883]: 2026-02-02 12:05:36.642 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:05:36 np0005604943 nova_compute[238883]: 2026-02-02 12:05:36.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 07:05:36 np0005604943 nova_compute[238883]: 2026-02-02 12:05:36.844 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:05:36 np0005604943 nova_compute[238883]: 2026-02-02 12:05:36.844 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquired lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:05:36 np0005604943 nova_compute[238883]: 2026-02-02 12:05:36.845 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 07:05:36 np0005604943 nova_compute[238883]: 2026-02-02 12:05:36.845 238887 DEBUG nova.objects.instance [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 49fa37c8-ff56-455b-9ce3-0bc67080ed52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:05:37 np0005604943 nova_compute[238883]: 2026-02-02 12:05:37.416 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:05:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Feb  2 07:05:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Feb  2 07:05:37 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Feb  2 07:05:37 np0005604943 nova_compute[238883]: 2026-02-02 12:05:37.772 238887 DEBUG nova.compute.manager [req-eef01af4-bfa2-4f69-ab88-e9f788c5196d req-5f6a8f5b-f9dd-4f54-be25-3df224b6be2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Received event network-changed-526abf6f-0054-4f1e-8c8c-761a2476046a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:37 np0005604943 nova_compute[238883]: 2026-02-02 12:05:37.773 238887 DEBUG nova.compute.manager [req-eef01af4-bfa2-4f69-ab88-e9f788c5196d req-5f6a8f5b-f9dd-4f54-be25-3df224b6be2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Refreshing instance network info cache due to event network-changed-526abf6f-0054-4f1e-8c8c-761a2476046a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:05:37 np0005604943 nova_compute[238883]: 2026-02-02 12:05:37.773 238887 DEBUG oslo_concurrency.lockutils [req-eef01af4-bfa2-4f69-ab88-e9f788c5196d req-5f6a8f5b-f9dd-4f54-be25-3df224b6be2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-804c52ce-4b15-4c12-bfe7-efe1281d3dc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:05:37 np0005604943 nova_compute[238883]: 2026-02-02 12:05:37.773 238887 DEBUG oslo_concurrency.lockutils [req-eef01af4-bfa2-4f69-ab88-e9f788c5196d req-5f6a8f5b-f9dd-4f54-be25-3df224b6be2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-804c52ce-4b15-4c12-bfe7-efe1281d3dc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:05:37 np0005604943 nova_compute[238883]: 2026-02-02 12:05:37.773 238887 DEBUG nova.network.neutron [req-eef01af4-bfa2-4f69-ab88-e9f788c5196d req-5f6a8f5b-f9dd-4f54-be25-3df224b6be2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Refreshing network info cache for port 526abf6f-0054-4f1e-8c8c-761a2476046a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.125 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Updating instance_info_cache with network_info: [{"id": "41c28d19-861c-496e-ac87-5f0a4a987967", "address": "fa:16:3e:03:72:f2", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c28d19-86", "ovs_interfaceid": "41c28d19-861c-496e-ac87-5f0a4a987967", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.140 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Releasing lock "refresh_cache-49fa37c8-ff56-455b-9ce3-0bc67080ed52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.140 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.141 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.141 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.141 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.160 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.160 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.160 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.161 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.161 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 232 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 154 op/s
Feb  2 07:05:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:05:38 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1484635613' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.694 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.766 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.766 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.769 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.769 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.772 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.772 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.915 238887 DEBUG nova.network.neutron [req-eef01af4-bfa2-4f69-ab88-e9f788c5196d req-5f6a8f5b-f9dd-4f54-be25-3df224b6be2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Updated VIF entry in instance network info cache for port 526abf6f-0054-4f1e-8c8c-761a2476046a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.916 238887 DEBUG nova.network.neutron [req-eef01af4-bfa2-4f69-ab88-e9f788c5196d req-5f6a8f5b-f9dd-4f54-be25-3df224b6be2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Updating instance_info_cache with network_info: [{"id": "526abf6f-0054-4f1e-8c8c-761a2476046a", "address": "fa:16:3e:53:a6:48", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap526abf6f-00", "ovs_interfaceid": "526abf6f-0054-4f1e-8c8c-761a2476046a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.923 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.924 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3951MB free_disk=59.96691075246781GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.924 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.924 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:38 np0005604943 nova_compute[238883]: 2026-02-02 12:05:38.934 238887 DEBUG oslo_concurrency.lockutils [req-eef01af4-bfa2-4f69-ab88-e9f788c5196d req-5f6a8f5b-f9dd-4f54-be25-3df224b6be2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-804c52ce-4b15-4c12-bfe7-efe1281d3dc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:05:39 np0005604943 nova_compute[238883]: 2026-02-02 12:05:39.016 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 49fa37c8-ff56-455b-9ce3-0bc67080ed52 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:05:39 np0005604943 nova_compute[238883]: 2026-02-02 12:05:39.016 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance b2e6a3a8-544c-4442-ab4e-d27954c0de48 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:05:39 np0005604943 nova_compute[238883]: 2026-02-02 12:05:39.016 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:05:39 np0005604943 nova_compute[238883]: 2026-02-02 12:05:39.017 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:05:39 np0005604943 nova_compute[238883]: 2026-02-02 12:05:39.017 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:05:39 np0005604943 nova_compute[238883]: 2026-02-02 12:05:39.084 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:05:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2955565143' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:05:39 np0005604943 nova_compute[238883]: 2026-02-02 12:05:39.641 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:39 np0005604943 nova_compute[238883]: 2026-02-02 12:05:39.646 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:05:39 np0005604943 nova_compute[238883]: 2026-02-02 12:05:39.662 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:05:39 np0005604943 nova_compute[238883]: 2026-02-02 12:05:39.683 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:05:39 np0005604943 nova_compute[238883]: 2026-02-02 12:05:39.683 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:40 np0005604943 nova_compute[238883]: 2026-02-02 12:05:40.184 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:05:40 np0005604943 nova_compute[238883]: 2026-02-02 12:05:40.184 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:05:40 np0005604943 nova_compute[238883]: 2026-02-02 12:05:40.184 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:05:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 232 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Feb  2 07:05:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:05:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4002058040' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:05:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:05:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4002058040' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:05:40 np0005604943 nova_compute[238883]: 2026-02-02 12:05:40.636 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:05:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:05:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:05:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:05:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:05:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:05:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:05:41 np0005604943 nova_compute[238883]: 2026-02-02 12:05:41.585 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 232 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 644 KiB/s wr, 118 op/s
Feb  2 07:05:42 np0005604943 nova_compute[238883]: 2026-02-02 12:05:42.460 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:42 np0005604943 nova_compute[238883]: 2026-02-02 12:05:42.658 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:05:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:05:43 np0005604943 podman[263647]: 2026-02-02 12:05:43.084515477 +0000 UTC m=+0.091158290 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 07:05:43 np0005604943 podman[263646]: 2026-02-02 12:05:43.087316723 +0000 UTC m=+0.094842790 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 07:05:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:05:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3754159294' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:05:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:05:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3754159294' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:05:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 232 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.2 KiB/s wr, 107 op/s
Feb  2 07:05:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:05:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2551963917' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:05:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:05:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2551963917' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:05:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 232 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.2 KiB/s wr, 107 op/s
Feb  2 07:05:46 np0005604943 nova_compute[238883]: 2026-02-02 12:05:46.589 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:47 np0005604943 nova_compute[238883]: 2026-02-02 12:05:47.462 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:05:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 247 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 784 KiB/s rd, 1.7 MiB/s wr, 84 op/s
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.414 238887 DEBUG oslo_concurrency.lockutils [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.415 238887 DEBUG oslo_concurrency.lockutils [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.415 238887 DEBUG oslo_concurrency.lockutils [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.416 238887 DEBUG oslo_concurrency.lockutils [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.416 238887 DEBUG oslo_concurrency.lockutils [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.418 238887 INFO nova.compute.manager [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Terminating instance#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.419 238887 DEBUG nova.compute.manager [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:05:48 np0005604943 kernel: tap93006a5f-20 (unregistering): left promiscuous mode
Feb  2 07:05:48 np0005604943 NetworkManager[49093]: <info>  [1770033948.4725] device (tap93006a5f-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.486 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:48 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:48Z|00188|binding|INFO|Releasing lport 93006a5f-209d-479d-85bb-9f019bd5ddff from this chassis (sb_readonly=0)
Feb  2 07:05:48 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:48Z|00189|binding|INFO|Setting lport 93006a5f-209d-479d-85bb-9f019bd5ddff down in Southbound
Feb  2 07:05:48 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:48Z|00190|binding|INFO|Removing iface tap93006a5f-20 ovn-installed in OVS
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.495 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.500 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:f3:3d 10.100.0.12'], port_security=['fa:16:3e:d6:f3:3d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b2e6a3a8-544c-4442-ab4e-d27954c0de48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '4', 'neutron:security_group_ids': '47811367-fb4b-48f8-b202-cddf3c298120', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=93006a5f-209d-479d-85bb-9f019bd5ddff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.502 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 93006a5f-209d-479d-85bb-9f019bd5ddff in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 unbound from our chassis#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.501 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.503 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34290362-cccd-452d-8e7e-22a6057fdb60#033[00m
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.520 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[92ec4bcf-082b-479a-a897-1203f8b8b8bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:48 np0005604943 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Feb  2 07:05:48 np0005604943 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 14.725s CPU time.
Feb  2 07:05:48 np0005604943 systemd-machined[206973]: Machine qemu-18-instance-00000012 terminated.
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.548 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[65ce3660-80ac-46f5-8394-59745f737faa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.551 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[ae71de06-d4e4-4b8e-b0eb-6ccd4104e6ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.578 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[019c6f49-dd82-40c8-b010-3d0fd132a679]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:48 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:48Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:a6:48 10.100.0.6
Feb  2 07:05:48 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:48Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:a6:48 10.100.0.6
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.596 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[602bfbc9-7424-4d59-8d33-ae872a728954]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432192, 'reachable_time': 23049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263705, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.610 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a359460e-96b1-4977-b30e-379031299c6d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34290362-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432203, 'tstamp': 432203}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263706, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34290362-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432206, 'tstamp': 432206}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263706, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.612 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.614 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.620 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.621 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34290362-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.621 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.622 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34290362-c0, col_values=(('external_ids', {'iface-id': '54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:48 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:48.622 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.638 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.642 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.652 238887 INFO nova.virt.libvirt.driver [-] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Instance destroyed successfully.#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.653 238887 DEBUG nova.objects.instance [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'resources' on Instance uuid b2e6a3a8-544c-4442-ab4e-d27954c0de48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.667 238887 DEBUG nova.virt.libvirt.vif [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:05:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-146622984',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-146622984',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-146622984',id=18,image_ref='421c3c59-9b2e-48e9-be9c-5972b0d34b00',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFo6aZgzdmgq5Xs7apZZZSeH77QPXs3ivcJkISaDCec1l8Xq3E0TW/61SOm+v7JQhl+wSwPBZfZufSXwDEGpOVbRLprl32CQssPm67PIHYznTUlSBm7nl+pRhRqTzDTQHQ==',key_name='tempest-keypair-1283497596',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:05:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-ueicvhch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1059348902',image_owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:05:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=b2e6a3a8-544c-4442-ab4e-d27954c0de48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": 
"93006a5f-209d-479d-85bb-9f019bd5ddff", "address": "fa:16:3e:d6:f3:3d", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93006a5f-20", "ovs_interfaceid": "93006a5f-209d-479d-85bb-9f019bd5ddff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.667 238887 DEBUG nova.network.os_vif_util [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "93006a5f-209d-479d-85bb-9f019bd5ddff", "address": "fa:16:3e:d6:f3:3d", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap93006a5f-20", "ovs_interfaceid": "93006a5f-209d-479d-85bb-9f019bd5ddff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.668 238887 DEBUG nova.network.os_vif_util [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:f3:3d,bridge_name='br-int',has_traffic_filtering=True,id=93006a5f-209d-479d-85bb-9f019bd5ddff,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93006a5f-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.669 238887 DEBUG os_vif [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:f3:3d,bridge_name='br-int',has_traffic_filtering=True,id=93006a5f-209d-479d-85bb-9f019bd5ddff,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93006a5f-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.671 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.672 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93006a5f-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.674 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.675 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.678 238887 INFO os_vif [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:f3:3d,bridge_name='br-int',has_traffic_filtering=True,id=93006a5f-209d-479d-85bb-9f019bd5ddff,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap93006a5f-20')#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.720 238887 DEBUG nova.compute.manager [req-a38c9317-b8c3-4661-87e8-4252043d1a4c req-b3173a7a-da7d-4c4c-a9fa-176402ec8f77 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Received event network-vif-unplugged-93006a5f-209d-479d-85bb-9f019bd5ddff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.721 238887 DEBUG oslo_concurrency.lockutils [req-a38c9317-b8c3-4661-87e8-4252043d1a4c req-b3173a7a-da7d-4c4c-a9fa-176402ec8f77 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.721 238887 DEBUG oslo_concurrency.lockutils [req-a38c9317-b8c3-4661-87e8-4252043d1a4c req-b3173a7a-da7d-4c4c-a9fa-176402ec8f77 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.721 238887 DEBUG oslo_concurrency.lockutils [req-a38c9317-b8c3-4661-87e8-4252043d1a4c req-b3173a7a-da7d-4c4c-a9fa-176402ec8f77 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.722 238887 DEBUG nova.compute.manager [req-a38c9317-b8c3-4661-87e8-4252043d1a4c req-b3173a7a-da7d-4c4c-a9fa-176402ec8f77 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] No waiting events found dispatching network-vif-unplugged-93006a5f-209d-479d-85bb-9f019bd5ddff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.722 238887 DEBUG nova.compute.manager [req-a38c9317-b8c3-4661-87e8-4252043d1a4c req-b3173a7a-da7d-4c4c-a9fa-176402ec8f77 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Received event network-vif-unplugged-93006a5f-209d-479d-85bb-9f019bd5ddff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.805 238887 INFO nova.virt.libvirt.driver [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Deleting instance files /var/lib/nova/instances/b2e6a3a8-544c-4442-ab4e-d27954c0de48_del#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.806 238887 INFO nova.virt.libvirt.driver [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Deletion of /var/lib/nova/instances/b2e6a3a8-544c-4442-ab4e-d27954c0de48_del complete#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.854 238887 INFO nova.compute.manager [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Took 0.43 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.854 238887 DEBUG oslo.service.loopingcall [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.855 238887 DEBUG nova.compute.manager [-] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:05:48 np0005604943 nova_compute[238883]: 2026-02-02 12:05:48.855 238887 DEBUG nova.network.neutron [-] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:05:49 np0005604943 nova_compute[238883]: 2026-02-02 12:05:49.632 238887 DEBUG nova.network.neutron [-] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:05:49 np0005604943 nova_compute[238883]: 2026-02-02 12:05:49.654 238887 INFO nova.compute.manager [-] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Took 0.80 seconds to deallocate network for instance.#033[00m
Feb  2 07:05:49 np0005604943 nova_compute[238883]: 2026-02-02 12:05:49.906 238887 DEBUG nova.compute.manager [req-e2b4ea72-08f8-4071-a9bb-8d695eeac3be req-b07d983c-bfd9-445e-9440-c7d274b29641 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Received event network-vif-deleted-93006a5f-209d-479d-85bb-9f019bd5ddff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:50 np0005604943 nova_compute[238883]: 2026-02-02 12:05:50.033 238887 INFO nova.compute.manager [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Took 0.38 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:05:50 np0005604943 nova_compute[238883]: 2026-02-02 12:05:50.035 238887 DEBUG nova.compute.manager [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Deleting volume: c666995f-b3ea-40b8-b445-50f26d9b6bec _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Feb  2 07:05:50 np0005604943 nova_compute[238883]: 2026-02-02 12:05:50.344 238887 DEBUG oslo_concurrency.lockutils [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:50 np0005604943 nova_compute[238883]: 2026-02-02 12:05:50.345 238887 DEBUG oslo_concurrency.lockutils [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 257 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 821 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Feb  2 07:05:50 np0005604943 nova_compute[238883]: 2026-02-02 12:05:50.458 238887 DEBUG oslo_concurrency.processutils [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:50 np0005604943 nova_compute[238883]: 2026-02-02 12:05:50.846 238887 DEBUG nova.compute.manager [req-3f62754a-773f-4cd7-a238-2046f9c17938 req-67a7d9c0-afef-42c0-8a77-fc65a6fb8618 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Received event network-vif-plugged-93006a5f-209d-479d-85bb-9f019bd5ddff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:50 np0005604943 nova_compute[238883]: 2026-02-02 12:05:50.848 238887 DEBUG oslo_concurrency.lockutils [req-3f62754a-773f-4cd7-a238-2046f9c17938 req-67a7d9c0-afef-42c0-8a77-fc65a6fb8618 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:50 np0005604943 nova_compute[238883]: 2026-02-02 12:05:50.849 238887 DEBUG oslo_concurrency.lockutils [req-3f62754a-773f-4cd7-a238-2046f9c17938 req-67a7d9c0-afef-42c0-8a77-fc65a6fb8618 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:50 np0005604943 nova_compute[238883]: 2026-02-02 12:05:50.849 238887 DEBUG oslo_concurrency.lockutils [req-3f62754a-773f-4cd7-a238-2046f9c17938 req-67a7d9c0-afef-42c0-8a77-fc65a6fb8618 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:50 np0005604943 nova_compute[238883]: 2026-02-02 12:05:50.850 238887 DEBUG nova.compute.manager [req-3f62754a-773f-4cd7-a238-2046f9c17938 req-67a7d9c0-afef-42c0-8a77-fc65a6fb8618 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] No waiting events found dispatching network-vif-plugged-93006a5f-209d-479d-85bb-9f019bd5ddff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:05:50 np0005604943 nova_compute[238883]: 2026-02-02 12:05:50.850 238887 WARNING nova.compute.manager [req-3f62754a-773f-4cd7-a238-2046f9c17938 req-67a7d9c0-afef-42c0-8a77-fc65a6fb8618 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Received unexpected event network-vif-plugged-93006a5f-209d-479d-85bb-9f019bd5ddff for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:05:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:05:50 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/421201821' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:05:51 np0005604943 nova_compute[238883]: 2026-02-02 12:05:51.021 238887 DEBUG oslo_concurrency.processutils [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:05:51 np0005604943 nova_compute[238883]: 2026-02-02 12:05:51.029 238887 DEBUG nova.compute.provider_tree [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:05:51 np0005604943 nova_compute[238883]: 2026-02-02 12:05:51.164 238887 DEBUG nova.scheduler.client.report [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:05:51 np0005604943 nova_compute[238883]: 2026-02-02 12:05:51.190 238887 DEBUG oslo_concurrency.lockutils [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:05:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1160333764' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:05:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:05:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1160333764' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:05:51 np0005604943 nova_compute[238883]: 2026-02-02 12:05:51.216 238887 INFO nova.scheduler.client.report [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Deleted allocations for instance b2e6a3a8-544c-4442-ab4e-d27954c0de48#033[00m
Feb  2 07:05:51 np0005604943 nova_compute[238883]: 2026-02-02 12:05:51.266 238887 DEBUG oslo_concurrency.lockutils [None req-6518c141-af63-4365-8284-b283481ad1a6 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "b2e6a3a8-544c-4442-ab4e-d27954c0de48" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:51 np0005604943 nova_compute[238883]: 2026-02-02 12:05:51.591 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Feb  2 07:05:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Feb  2 07:05:51 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Feb  2 07:05:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 253 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 383 KiB/s rd, 2.6 MiB/s wr, 127 op/s
Feb  2 07:05:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:05:52 np0005604943 nova_compute[238883]: 2026-02-02 12:05:52.899 238887 DEBUG oslo_concurrency.lockutils [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:52 np0005604943 nova_compute[238883]: 2026-02-02 12:05:52.899 238887 DEBUG oslo_concurrency.lockutils [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:52 np0005604943 nova_compute[238883]: 2026-02-02 12:05:52.900 238887 DEBUG oslo_concurrency.lockutils [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:52 np0005604943 nova_compute[238883]: 2026-02-02 12:05:52.900 238887 DEBUG oslo_concurrency.lockutils [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:52 np0005604943 nova_compute[238883]: 2026-02-02 12:05:52.900 238887 DEBUG oslo_concurrency.lockutils [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:52 np0005604943 nova_compute[238883]: 2026-02-02 12:05:52.901 238887 INFO nova.compute.manager [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Terminating instance#033[00m
Feb  2 07:05:52 np0005604943 nova_compute[238883]: 2026-02-02 12:05:52.902 238887 DEBUG nova.compute.manager [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:05:52 np0005604943 kernel: tap41c28d19-86 (unregistering): left promiscuous mode
Feb  2 07:05:52 np0005604943 NetworkManager[49093]: <info>  [1770033952.9478] device (tap41c28d19-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:05:52 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:52Z|00191|binding|INFO|Releasing lport 41c28d19-861c-496e-ac87-5f0a4a987967 from this chassis (sb_readonly=0)
Feb  2 07:05:52 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:52Z|00192|binding|INFO|Setting lport 41c28d19-861c-496e-ac87-5f0a4a987967 down in Southbound
Feb  2 07:05:52 np0005604943 ovn_controller[145056]: 2026-02-02T12:05:52Z|00193|binding|INFO|Removing iface tap41c28d19-86 ovn-installed in OVS
Feb  2 07:05:52 np0005604943 nova_compute[238883]: 2026-02-02 12:05:52.959 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:52 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:52.965 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:72:f2 10.100.0.14'], port_security=['fa:16:3e:03:72:f2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '49fa37c8-ff56-455b-9ce3-0bc67080ed52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dec0c13f-4257-499f-8319-0d7aea717815', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=41c28d19-861c-496e-ac87-5f0a4a987967) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:05:52 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:52.966 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 41c28d19-861c-496e-ac87-5f0a4a987967 in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 unbound from our chassis#033[00m
Feb  2 07:05:52 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:52.968 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34290362-cccd-452d-8e7e-22a6057fdb60, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:05:52 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:52.969 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b2149ceb-5fba-43ad-a85d-42770d973f52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:52 np0005604943 nova_compute[238883]: 2026-02-02 12:05:52.969 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:52 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:52.970 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 namespace which is not needed anymore#033[00m
Feb  2 07:05:53 np0005604943 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Feb  2 07:05:53 np0005604943 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 14.594s CPU time.
Feb  2 07:05:53 np0005604943 systemd-machined[206973]: Machine qemu-17-instance-00000011 terminated.
Feb  2 07:05:53 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[261945]: [NOTICE]   (261949) : haproxy version is 2.8.14-c23fe91
Feb  2 07:05:53 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[261945]: [NOTICE]   (261949) : path to executable is /usr/sbin/haproxy
Feb  2 07:05:53 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[261945]: [ALERT]    (261949) : Current worker (261951) exited with code 143 (Terminated)
Feb  2 07:05:53 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[261945]: [WARNING]  (261949) : All workers exited. Exiting... (0)
Feb  2 07:05:53 np0005604943 systemd[1]: libpod-be27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44.scope: Deactivated successfully.
Feb  2 07:05:53 np0005604943 podman[263783]: 2026-02-02 12:05:53.112278849 +0000 UTC m=+0.045529990 container died be27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.138 238887 INFO nova.virt.libvirt.driver [-] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Instance destroyed successfully.#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.139 238887 DEBUG nova.objects.instance [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'resources' on Instance uuid 49fa37c8-ff56-455b-9ce3-0bc67080ed52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:05:53 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44-userdata-shm.mount: Deactivated successfully.
Feb  2 07:05:53 np0005604943 systemd[1]: var-lib-containers-storage-overlay-90d825938c429ed5ae861a4a4772a57336c43e79f85087c4003cd7fbafe1468c-merged.mount: Deactivated successfully.
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.158 238887 DEBUG nova.compute.manager [req-689a6c1f-b473-4589-8e26-ba5f8f523754 req-6cc88777-55e4-4607-9dcf-769717790e02 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Received event network-vif-unplugged-41c28d19-861c-496e-ac87-5f0a4a987967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.158 238887 DEBUG oslo_concurrency.lockutils [req-689a6c1f-b473-4589-8e26-ba5f8f523754 req-6cc88777-55e4-4607-9dcf-769717790e02 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.159 238887 DEBUG oslo_concurrency.lockutils [req-689a6c1f-b473-4589-8e26-ba5f8f523754 req-6cc88777-55e4-4607-9dcf-769717790e02 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.159 238887 DEBUG oslo_concurrency.lockutils [req-689a6c1f-b473-4589-8e26-ba5f8f523754 req-6cc88777-55e4-4607-9dcf-769717790e02 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.159 238887 DEBUG nova.compute.manager [req-689a6c1f-b473-4589-8e26-ba5f8f523754 req-6cc88777-55e4-4607-9dcf-769717790e02 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] No waiting events found dispatching network-vif-unplugged-41c28d19-861c-496e-ac87-5f0a4a987967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.159 238887 DEBUG nova.compute.manager [req-689a6c1f-b473-4589-8e26-ba5f8f523754 req-6cc88777-55e4-4607-9dcf-769717790e02 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Received event network-vif-unplugged-41c28d19-861c-496e-ac87-5f0a4a987967 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.161 238887 DEBUG nova.virt.libvirt.vif [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-857108296',display_name='tempest-TestVolumeBootPattern-volume-backed-server-857108296',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-857108296',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBACjE8dh8V4cWVkX+yw8FrLRJLdPBbPG6UdwUbgn0Rgy2SUN5h0MPu7kenkNTdDGKiMuhvvLOA289aOvZUc8b0RlFCKC9xUSfOOYeEtIvthB/OR92xZN54m1j4SqVjCg9g==',key_name='tempest-keypair-1934849553',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:04:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-0kq0995v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:04:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=49fa37c8-ff56-455b-9ce3-0bc67080ed52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41c28d19-861c-496e-ac87-5f0a4a987967", "address": "fa:16:3e:03:72:f2", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 
4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c28d19-86", "ovs_interfaceid": "41c28d19-861c-496e-ac87-5f0a4a987967", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.161 238887 DEBUG nova.network.os_vif_util [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "41c28d19-861c-496e-ac87-5f0a4a987967", "address": "fa:16:3e:03:72:f2", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c28d19-86", "ovs_interfaceid": "41c28d19-861c-496e-ac87-5f0a4a987967", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.162 238887 DEBUG nova.network.os_vif_util [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:72:f2,bridge_name='br-int',has_traffic_filtering=True,id=41c28d19-861c-496e-ac87-5f0a4a987967,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c28d19-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.162 238887 DEBUG os_vif [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:72:f2,bridge_name='br-int',has_traffic_filtering=True,id=41c28d19-861c-496e-ac87-5f0a4a987967,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c28d19-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.163 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.164 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41c28d19-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:53 np0005604943 podman[263783]: 2026-02-02 12:05:53.164908499 +0000 UTC m=+0.098159630 container cleanup be27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.166 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.168 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.170 238887 INFO os_vif [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:72:f2,bridge_name='br-int',has_traffic_filtering=True,id=41c28d19-861c-496e-ac87-5f0a4a987967,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c28d19-86')#033[00m
Feb  2 07:05:53 np0005604943 systemd[1]: libpod-conmon-be27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44.scope: Deactivated successfully.
Feb  2 07:05:53 np0005604943 podman[263823]: 2026-02-02 12:05:53.229742058 +0000 UTC m=+0.044083551 container remove be27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 07:05:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:53.236 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[af069e38-3ccf-490f-ae19-14226653da0f]: (4, ('Mon Feb  2 12:05:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 (be27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44)\nbe27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44\nMon Feb  2 12:05:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 (be27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44)\nbe27336f3d7020d9eb75fa3952be1efac520b2dfae573b91897d49bc7d107e44\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:53.238 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[38708a2d-de18-41f3-9051-ae8c32cf5d48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:53.239 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:05:53 np0005604943 kernel: tap34290362-c0: left promiscuous mode
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.241 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.249 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:05:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:53.252 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[23b9fa52-b846-4bce-8c3c-94e1d3f1e31e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:53.274 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[64211b4c-c544-49ae-93c3-6607e857f94c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:53.275 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd16ef5-e02c-4877-b373-5efdd2132288]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:53.292 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[de52afd7-65f4-460a-a308-3d4789aac5e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432185, 'reachable_time': 21959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263855, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:53.296 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:05:53 np0005604943 systemd[1]: run-netns-ovnmeta\x2d34290362\x2dcccd\x2d452d\x2d8e7e\x2d22a6057fdb60.mount: Deactivated successfully.
Feb  2 07:05:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:05:53.296 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[8b92e9ae-4e11-4fb3-aef4-9030eb96c0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.321 238887 INFO nova.virt.libvirt.driver [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Deleting instance files /var/lib/nova/instances/49fa37c8-ff56-455b-9ce3-0bc67080ed52_del#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.321 238887 INFO nova.virt.libvirt.driver [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Deletion of /var/lib/nova/instances/49fa37c8-ff56-455b-9ce3-0bc67080ed52_del complete#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.381 238887 INFO nova.compute.manager [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Took 0.48 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.382 238887 DEBUG oslo.service.loopingcall [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.382 238887 DEBUG nova.compute.manager [-] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:05:53 np0005604943 nova_compute[238883]: 2026-02-02 12:05:53.382 238887 DEBUG nova.network.neutron [-] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:05:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 247 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 2.6 MiB/s wr, 148 op/s
Feb  2 07:05:54 np0005604943 nova_compute[238883]: 2026-02-02 12:05:54.738 238887 DEBUG nova.network.neutron [-] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:05:54 np0005604943 nova_compute[238883]: 2026-02-02 12:05:54.753 238887 INFO nova.compute.manager [-] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Took 1.37 seconds to deallocate network for instance.#033[00m
Feb  2 07:05:54 np0005604943 nova_compute[238883]: 2026-02-02 12:05:54.901 238887 INFO nova.compute.manager [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Took 0.15 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:05:54 np0005604943 nova_compute[238883]: 2026-02-02 12:05:54.903 238887 DEBUG nova.compute.manager [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Deleting volume: 1645c0c1-d976-4f9f-ad42-eca5c2c0ddb0 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.059 238887 DEBUG oslo_concurrency.lockutils [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.060 238887 DEBUG oslo_concurrency.lockutils [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.125 238887 DEBUG oslo_concurrency.processutils [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.233 238887 DEBUG nova.compute.manager [req-c1a38b1d-d00f-48fd-b3f4-170f7d15786b req-8d362592-0522-4f48-87f2-82bb92c5d68a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Received event network-vif-plugged-41c28d19-861c-496e-ac87-5f0a4a987967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.234 238887 DEBUG oslo_concurrency.lockutils [req-c1a38b1d-d00f-48fd-b3f4-170f7d15786b req-8d362592-0522-4f48-87f2-82bb92c5d68a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.234 238887 DEBUG oslo_concurrency.lockutils [req-c1a38b1d-d00f-48fd-b3f4-170f7d15786b req-8d362592-0522-4f48-87f2-82bb92c5d68a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.235 238887 DEBUG oslo_concurrency.lockutils [req-c1a38b1d-d00f-48fd-b3f4-170f7d15786b req-8d362592-0522-4f48-87f2-82bb92c5d68a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.235 238887 DEBUG nova.compute.manager [req-c1a38b1d-d00f-48fd-b3f4-170f7d15786b req-8d362592-0522-4f48-87f2-82bb92c5d68a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] No waiting events found dispatching network-vif-plugged-41c28d19-861c-496e-ac87-5f0a4a987967 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.235 238887 WARNING nova.compute.manager [req-c1a38b1d-d00f-48fd-b3f4-170f7d15786b req-8d362592-0522-4f48-87f2-82bb92c5d68a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Received unexpected event network-vif-plugged-41c28d19-861c-496e-ac87-5f0a4a987967 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.236 238887 DEBUG nova.compute.manager [req-c1a38b1d-d00f-48fd-b3f4-170f7d15786b req-8d362592-0522-4f48-87f2-82bb92c5d68a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Received event network-vif-deleted-41c28d19-861c-496e-ac87-5f0a4a987967 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:05:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:05:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2873156921' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:05:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:05:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2873156921' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:05:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:05:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4175957919' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.685 238887 DEBUG oslo_concurrency.processutils [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.691 238887 DEBUG nova.compute.provider_tree [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.712 238887 DEBUG nova.scheduler.client.report [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.734 238887 DEBUG oslo_concurrency.lockutils [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.773 238887 INFO nova.scheduler.client.report [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Deleted allocations for instance 49fa37c8-ff56-455b-9ce3-0bc67080ed52
Feb  2 07:05:55 np0005604943 nova_compute[238883]: 2026-02-02 12:05:55.838 238887 DEBUG oslo_concurrency.lockutils [None req-d7f57e49-2ea4-4793-994d-cbddfb00176b 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "49fa37c8-ff56-455b-9ce3-0bc67080ed52" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:05:56 np0005604943 nova_compute[238883]: 2026-02-02 12:05:56.264 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:05:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 247 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 2.6 MiB/s wr, 148 op/s
Feb  2 07:05:56 np0005604943 nova_compute[238883]: 2026-02-02 12:05:56.593 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:05:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:05:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Feb  2 07:05:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Feb  2 07:05:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Feb  2 07:05:58 np0005604943 nova_compute[238883]: 2026-02-02 12:05:58.167 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:05:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 202 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 237 KiB/s rd, 119 KiB/s wr, 118 op/s
Feb  2 07:05:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Feb  2 07:05:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Feb  2 07:05:58 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Feb  2 07:06:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 23 KiB/s wr, 76 op/s
Feb  2 07:06:00 np0005604943 nova_compute[238883]: 2026-02-02 12:06:00.939 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:06:01 np0005604943 nova_compute[238883]: 2026-02-02 12:06:01.596 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:06:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 21 KiB/s wr, 49 op/s
Feb  2 07:06:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:06:03 np0005604943 nova_compute[238883]: 2026-02-02 12:06:03.170 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:06:03 np0005604943 nova_compute[238883]: 2026-02-02 12:06:03.651 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033948.6498792, b2e6a3a8-544c-4442-ab4e-d27954c0de48 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:06:03 np0005604943 nova_compute[238883]: 2026-02-02 12:06:03.652 238887 INFO nova.compute.manager [-] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] VM Stopped (Lifecycle Event)
Feb  2 07:06:03 np0005604943 nova_compute[238883]: 2026-02-02 12:06:03.676 238887 DEBUG nova.compute.manager [None req-0f002479-9d8c-45a2-b243-70ed9cb63ad3 - - - - - -] [instance: b2e6a3a8-544c-4442-ab4e-d27954c0de48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:06:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 22 KiB/s wr, 73 op/s
Feb  2 07:06:05 np0005604943 nova_compute[238883]: 2026-02-02 12:06:05.873 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:06:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.2 KiB/s wr, 46 op/s
Feb  2 07:06:06 np0005604943 nova_compute[238883]: 2026-02-02 12:06:06.626 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:06:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:06:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Feb  2 07:06:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Feb  2 07:06:07 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Feb  2 07:06:08 np0005604943 nova_compute[238883]: 2026-02-02 12:06:08.133 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033953.1311207, 49fa37c8-ff56-455b-9ce3-0bc67080ed52 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:06:08 np0005604943 nova_compute[238883]: 2026-02-02 12:06:08.133 238887 INFO nova.compute.manager [-] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] VM Stopped (Lifecycle Event)
Feb  2 07:06:08 np0005604943 nova_compute[238883]: 2026-02-02 12:06:08.160 238887 DEBUG nova.compute.manager [None req-e1adedd1-30dc-4ee4-b181-b79058cfda4f - - - - - -] [instance: 49fa37c8-ff56-455b-9ce3-0bc67080ed52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:06:08 np0005604943 nova_compute[238883]: 2026-02-02 12:06:08.173 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:06:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 32 op/s
Feb  2 07:06:08 np0005604943 nova_compute[238883]: 2026-02-02 12:06:08.926 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:06:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:06:09
Feb  2 07:06:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:06:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:06:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', '.mgr', 'vms', 'default.rgw.control']
Feb  2 07:06:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:06:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:10.031 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:06:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:10.032 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:06:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:10.032 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 KiB/s wr, 31 op/s
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:06:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.614 238887 DEBUG oslo_concurrency.lockutils [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.614 238887 DEBUG oslo_concurrency.lockutils [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.632 238887 DEBUG nova.objects.instance [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lazy-loading 'flavor' on Instance uuid 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.634 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.673 238887 DEBUG oslo_concurrency.lockutils [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.840 238887 DEBUG oslo_concurrency.lockutils [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.841 238887 DEBUG oslo_concurrency.lockutils [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.842 238887 INFO nova.compute.manager [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Attaching volume aefa5d1b-8cee-4d85-b079-e06a2af3a859 to /dev/vdb
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.960 238887 DEBUG os_brick.utils [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.961 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.971 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.972 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[35d994ec-cf1a-49dc-bbe6-8a5010cdee32]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.973 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.980 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.981 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[dd167b72-6edc-4c2c-abcf-e1f9315cd16a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.982 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:06:11 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Feb  2 07:06:11 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:11.987641) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 07:06:11 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Feb  2 07:06:11 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033971987672, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2611, "num_deletes": 516, "total_data_size": 3435523, "memory_usage": 3502536, "flush_reason": "Manual Compaction"}
Feb  2 07:06:11 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.991 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.991 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[3f783651-d63e-438e-97d0-5a2667378ce2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.993 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[dfadb56a-7ddc-4997-996b-970b24206e50]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:06:11 np0005604943 nova_compute[238883]: 2026-02-02 12:06:11.993 238887 DEBUG oslo_concurrency.processutils [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:06:11 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033971999777, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2767537, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26773, "largest_seqno": 29383, "table_properties": {"data_size": 2756962, "index_size": 6173, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 27643, "raw_average_key_size": 21, "raw_value_size": 2732936, "raw_average_value_size": 2083, "num_data_blocks": 269, "num_entries": 1312, "num_filter_entries": 1312, "num_deletions": 516, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033806, "oldest_key_time": 1770033806, "file_creation_time": 1770033971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:06:11 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 12169 microseconds, and 6114 cpu microseconds.
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:11.999809) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2767537 bytes OK
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:11.999826) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:12.001270) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:12.001281) EVENT_LOG_v1 {"time_micros": 1770033972001278, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:12.001301) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3423354, prev total WAL file size 3423354, number of live WAL files 2.
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:12.002096) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2702KB)], [59(10MB)]
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033972002120, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13773836, "oldest_snapshot_seqno": -1}
Feb  2 07:06:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:12.007 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 07:06:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:12.008 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.017 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.020 238887 DEBUG oslo_concurrency.processutils [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.022 238887 DEBUG os_brick.initiator.connectors.lightos [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.022 238887 DEBUG os_brick.initiator.connectors.lightos [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.022 238887 DEBUG os_brick.initiator.connectors.lightos [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.022 238887 DEBUG os_brick.utils [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] <== get_connector_properties: return (62ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.023 238887 DEBUG nova.virt.block_device [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Updating existing volume attachment record: 653a15b4-7b23-4e24-87e7-e941affee220 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5970 keys, 9236668 bytes, temperature: kUnknown
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033972041768, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9236668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9192129, "index_size": 28493, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 149276, "raw_average_key_size": 25, "raw_value_size": 9080167, "raw_average_value_size": 1520, "num_data_blocks": 1149, "num_entries": 5970, "num_filter_entries": 5970, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770033972, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:12.042079) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9236668 bytes
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:12.043471) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 346.5 rd, 232.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 10.5 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(8.3) write-amplify(3.3) OK, records in: 6967, records dropped: 997 output_compression: NoCompression
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:12.043491) EVENT_LOG_v1 {"time_micros": 1770033972043481, "job": 32, "event": "compaction_finished", "compaction_time_micros": 39756, "compaction_time_cpu_micros": 18069, "output_level": 6, "num_output_files": 1, "total_output_size": 9236668, "num_input_records": 6967, "num_output_records": 5970, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033972043852, "job": 32, "event": "table_file_deletion", "file_number": 61}
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770033972045178, "job": 32, "event": "table_file_deletion", "file_number": 59}
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:12.002041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:12.045254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:12.045264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:12.045265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:12.045266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:06:12.045268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:06:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 190 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 46 op/s
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2286360234' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:06:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.874 238887 DEBUG os_brick.encryptors [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Using volume encryption metadata '{'encryption_key_id': '9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-aefa5d1b-8cee-4d85-b079-e06a2af3a859', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'aefa5d1b-8cee-4d85-b079-e06a2af3a859', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '804c52ce-4b15-4c12-bfe7-efe1281d3dc1', 'attached_at': '', 'detached_at': '', 'volume_id': 'aefa5d1b-8cee-4d85-b079-e06a2af3a859', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.880 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.896 238887 DEBUG barbicanclient.v1.secrets [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.897 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.925 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.925 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.949 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.950 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.969 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.970 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.991 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:12 np0005604943 nova_compute[238883]: 2026-02-02 12:06:12.991 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.012 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.013 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.033 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.034 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.051 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.052 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.072 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.073 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.129 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.130 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.176 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.188 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.189 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.214 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.215 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.237 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.238 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.260 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.261 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.279 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.280 238887 INFO barbicanclient.base [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.303 238887 DEBUG barbicanclient.client [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.305 238887 DEBUG nova.virt.libvirt.host [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 07:06:13 np0005604943 nova_compute[238883]:  <usage type="volume">
Feb  2 07:06:13 np0005604943 nova_compute[238883]:    <volume>aefa5d1b-8cee-4d85-b079-e06a2af3a859</volume>
Feb  2 07:06:13 np0005604943 nova_compute[238883]:  </usage>
Feb  2 07:06:13 np0005604943 nova_compute[238883]: </secret>
Feb  2 07:06:13 np0005604943 nova_compute[238883]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.317 238887 DEBUG nova.objects.instance [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lazy-loading 'flavor' on Instance uuid 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.340 238887 DEBUG nova.virt.libvirt.driver [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Attempting to attach volume aefa5d1b-8cee-4d85-b079-e06a2af3a859 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 07:06:13 np0005604943 nova_compute[238883]: 2026-02-02 12:06:13.343 238887 DEBUG nova.virt.libvirt.guest [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 07:06:13 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:06:13 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-aefa5d1b-8cee-4d85-b079-e06a2af3a859">
Feb  2 07:06:13 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:06:13 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:06:13 np0005604943 nova_compute[238883]:  <auth username="openstack">
Feb  2 07:06:13 np0005604943 nova_compute[238883]:    <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:06:13 np0005604943 nova_compute[238883]:  </auth>
Feb  2 07:06:13 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:06:13 np0005604943 nova_compute[238883]:  <serial>aefa5d1b-8cee-4d85-b079-e06a2af3a859</serial>
Feb  2 07:06:13 np0005604943 nova_compute[238883]:  <encryption format="luks">
Feb  2 07:06:13 np0005604943 nova_compute[238883]:    <secret type="passphrase" uuid="d9a9e304-6b83-4f94-8eb4-2ce58b940681"/>
Feb  2 07:06:13 np0005604943 nova_compute[238883]:  </encryption>
Feb  2 07:06:13 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:06:13 np0005604943 nova_compute[238883]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 07:06:14 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:14.011 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:14 np0005604943 podman[263907]: 2026-02-02 12:06:14.035335693 +0000 UTC m=+0.051226543 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 07:06:14 np0005604943 podman[263906]: 2026-02-02 12:06:14.060582685 +0000 UTC m=+0.077205145 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 07:06:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 214 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 52 op/s
Feb  2 07:06:14 np0005604943 nova_compute[238883]: 2026-02-02 12:06:14.487 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.188 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "117c0603-9127-4e21-9fc6-df67391a5b24" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.189 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.206 238887 DEBUG nova.compute.manager [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.277 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.278 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.287 238887 DEBUG nova.virt.hardware [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.288 238887 INFO nova.compute.claims [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.415 238887 DEBUG oslo_concurrency.processutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.742 238887 DEBUG nova.virt.libvirt.driver [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.743 238887 DEBUG nova.virt.libvirt.driver [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.743 238887 DEBUG nova.virt.libvirt.driver [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.743 238887 DEBUG nova.virt.libvirt.driver [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] No VIF found with MAC fa:16:3e:53:a6:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:06:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:06:15 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/456481351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.954 238887 DEBUG oslo_concurrency.processutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.960 238887 DEBUG nova.compute.provider_tree [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.989 238887 DEBUG nova.scheduler.client.report [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:06:15 np0005604943 nova_compute[238883]: 2026-02-02 12:06:15.996 238887 DEBUG oslo_concurrency.lockutils [None req-796c2d06-9e77-4575-8fdd-3856f14f7dcd 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.015 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.016 238887 DEBUG nova.compute.manager [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.069 238887 DEBUG nova.compute.manager [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.070 238887 DEBUG nova.network.neutron [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.092 238887 INFO nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.112 238887 DEBUG nova.compute.manager [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.151 238887 INFO nova.virt.block_device [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Booting with volume 35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c at /dev/vda#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.326 238887 DEBUG os_brick.utils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.327 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.338 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.338 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[eca032cb-192b-4124-bd86-dc39e53d2740]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.340 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.346 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.347 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[4af66298-1f1e-4fe8-ab63-d941c08f2f5e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.348 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.354 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.354 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b1db8f-dd77-42cf-b921-c274307f3864]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.355 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[65333efc-b656-4101-823d-e93664fb346a]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.355 238887 DEBUG oslo_concurrency.processutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.375 238887 DEBUG oslo_concurrency.processutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.377 238887 DEBUG os_brick.initiator.connectors.lightos [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.378 238887 DEBUG os_brick.initiator.connectors.lightos [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.378 238887 DEBUG os_brick.initiator.connectors.lightos [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.378 238887 DEBUG os_brick.utils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] <== get_connector_properties: return (52ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.379 238887 DEBUG nova.virt.block_device [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Updating existing volume attachment record: d26619a0-2abb-44ff-805d-5494b9925df2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:06:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 214 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 52 op/s
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.483 238887 DEBUG nova.policy [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e3fc9d8415541ecaa0da4968c9fa242', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e66ed51ccbb840f083b8a86476696747', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.633 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.732 238887 DEBUG oslo_concurrency.lockutils [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.733 238887 DEBUG oslo_concurrency.lockutils [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.763 238887 INFO nova.compute.manager [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Detaching volume aefa5d1b-8cee-4d85-b079-e06a2af3a859#033[00m
Feb  2 07:06:16 np0005604943 nova_compute[238883]: 2026-02-02 12:06:16.929 238887 INFO nova.virt.block_device [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Attempting to driver detach volume aefa5d1b-8cee-4d85-b079-e06a2af3a859 from mountpoint /dev/vdb#033[00m
Feb  2 07:06:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:06:17 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2049077656' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.055 238887 DEBUG os_brick.encryptors [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Using volume encryption metadata '{'encryption_key_id': '9de16092-6cc5-4a61-8ba8-41ec1d6ad5b9', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-aefa5d1b-8cee-4d85-b079-e06a2af3a859', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'aefa5d1b-8cee-4d85-b079-e06a2af3a859', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '804c52ce-4b15-4c12-bfe7-efe1281d3dc1', 'attached_at': '', 'detached_at': '', 'volume_id': 'aefa5d1b-8cee-4d85-b079-e06a2af3a859', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.064 238887 DEBUG nova.virt.libvirt.driver [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Attempting to detach device vdb from instance 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.065 238887 DEBUG nova.virt.libvirt.guest [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-aefa5d1b-8cee-4d85-b079-e06a2af3a859">
Feb  2 07:06:17 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  <serial>aefa5d1b-8cee-4d85-b079-e06a2af3a859</serial>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  <encryption format="luks">
Feb  2 07:06:17 np0005604943 nova_compute[238883]:    <secret type="passphrase" uuid="d9a9e304-6b83-4f94-8eb4-2ce58b940681"/>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  </encryption>
Feb  2 07:06:17 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:06:17 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.076 238887 INFO nova.virt.libvirt.driver [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Successfully detached device vdb from instance 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 from the persistent domain config.#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.077 238887 DEBUG nova.virt.libvirt.driver [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.077 238887 DEBUG nova.virt.libvirt.guest [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-aefa5d1b-8cee-4d85-b079-e06a2af3a859">
Feb  2 07:06:17 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  <serial>aefa5d1b-8cee-4d85-b079-e06a2af3a859</serial>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  <encryption format="luks">
Feb  2 07:06:17 np0005604943 nova_compute[238883]:    <secret type="passphrase" uuid="d9a9e304-6b83-4f94-8eb4-2ce58b940681"/>
Feb  2 07:06:17 np0005604943 nova_compute[238883]:  </encryption>
Feb  2 07:06:17 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:06:17 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.128 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Received event <DeviceRemovedEvent: 1770033977.1280499, 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.129 238887 DEBUG nova.virt.libvirt.driver [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.132 238887 INFO nova.virt.libvirt.driver [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Successfully detached device vdb from instance 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 from the live domain config.#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.340 238887 DEBUG nova.objects.instance [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lazy-loading 'flavor' on Instance uuid 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.396 238887 DEBUG oslo_concurrency.lockutils [None req-019eabe4-b81d-49d6-9af1-925212953f81 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.424 238887 DEBUG nova.compute.manager [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.425 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.426 238887 INFO nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Creating image(s)#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.426 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.426 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Ensure instance console log exists: /var/lib/nova/instances/117c0603-9127-4e21-9fc6-df67391a5b24/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.427 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.427 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.427 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:06:17 np0005604943 nova_compute[238883]: 2026-02-02 12:06:17.945 238887 DEBUG nova.network.neutron [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Successfully created port: 24bdd88c-5f95-463b-940e-03c2b17e5e19 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.211 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.305 238887 DEBUG oslo_concurrency.lockutils [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.306 238887 DEBUG oslo_concurrency.lockutils [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.306 238887 DEBUG oslo_concurrency.lockutils [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.307 238887 DEBUG oslo_concurrency.lockutils [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.307 238887 DEBUG oslo_concurrency.lockutils [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.308 238887 INFO nova.compute.manager [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Terminating instance#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.309 238887 DEBUG nova.compute.manager [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:06:18 np0005604943 kernel: tap526abf6f-00 (unregistering): left promiscuous mode
Feb  2 07:06:18 np0005604943 NetworkManager[49093]: <info>  [1770033978.3609] device (tap526abf6f-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:06:18 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:18Z|00194|binding|INFO|Releasing lport 526abf6f-0054-4f1e-8c8c-761a2476046a from this chassis (sb_readonly=0)
Feb  2 07:06:18 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:18Z|00195|binding|INFO|Setting lport 526abf6f-0054-4f1e-8c8c-761a2476046a down in Southbound
Feb  2 07:06:18 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:18Z|00196|binding|INFO|Removing iface tap526abf6f-00 ovn-installed in OVS
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.369 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.373 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.379 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:a6:48 10.100.0.6'], port_security=['fa:16:3e:53:a6:48 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '804c52ce-4b15-4c12-bfe7-efe1281d3dc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb13b2a6-b763-41ef-a5c4-123372e94249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851fb6d80faf43cc9b2fef1913323704', 'neutron:revision_number': '4', 'neutron:security_group_ids': '70b41b0b-c892-46d7-b5d9-14c26fc19c78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10f2dc12-4c00-4783-968f-4cacec86630e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=526abf6f-0054-4f1e-8c8c-761a2476046a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.380 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 526abf6f-0054-4f1e-8c8c-761a2476046a in datapath fb13b2a6-b763-41ef-a5c4-123372e94249 unbound from our chassis#033[00m
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.382 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb13b2a6-b763-41ef-a5c4-123372e94249, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.383 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb69dc6-5c88-4f9a-9f4c-f50d7f358471]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.384 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 namespace which is not needed anymore#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.384 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 214 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 820 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Feb  2 07:06:18 np0005604943 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Feb  2 07:06:18 np0005604943 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 18.416s CPU time.
Feb  2 07:06:18 np0005604943 systemd-machined[206973]: Machine qemu-19-instance-00000013 terminated.
Feb  2 07:06:18 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[263288]: [NOTICE]   (263294) : haproxy version is 2.8.14-c23fe91
Feb  2 07:06:18 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[263288]: [NOTICE]   (263294) : path to executable is /usr/sbin/haproxy
Feb  2 07:06:18 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[263288]: [WARNING]  (263294) : Exiting Master process...
Feb  2 07:06:18 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[263288]: [ALERT]    (263294) : Current worker (263296) exited with code 143 (Terminated)
Feb  2 07:06:18 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[263288]: [WARNING]  (263294) : All workers exited. Exiting... (0)
Feb  2 07:06:18 np0005604943 systemd[1]: libpod-0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406.scope: Deactivated successfully.
Feb  2 07:06:18 np0005604943 podman[264004]: 2026-02-02 12:06:18.48697183 +0000 UTC m=+0.041138110 container died 0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:06:18 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406-userdata-shm.mount: Deactivated successfully.
Feb  2 07:06:18 np0005604943 systemd[1]: var-lib-containers-storage-overlay-acabbbcc7edf9cf5edb35bf26a5ac66fbb8ff8cc26def29aa4acab7acf2a92d7-merged.mount: Deactivated successfully.
Feb  2 07:06:18 np0005604943 podman[264004]: 2026-02-02 12:06:18.534527993 +0000 UTC m=+0.088694263 container cleanup 0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb  2 07:06:18 np0005604943 systemd[1]: libpod-conmon-0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406.scope: Deactivated successfully.
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.546 238887 INFO nova.virt.libvirt.driver [-] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Instance destroyed successfully.#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.546 238887 DEBUG nova.objects.instance [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lazy-loading 'resources' on Instance uuid 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.568 238887 DEBUG nova.virt.libvirt.vif [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1363330718',display_name='tempest-TestEncryptedCinderVolumes-server-1363330718',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1363330718',id=19,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBojyyZc1pB4qbhccFknnAVyH2qYCGB8sXr6VXf4RggmuyiwiRN8sR4YyL37CEKqQGLHnWQ85K+Sg330iXkE8rCxhD0x5sAmjwWVf2+FF2jQxgasqZQCdwAdLrujQSitwA==',key_name='tempest-keypair-1925156515',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:05:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='851fb6d80faf43cc9b2fef1913323704',ramdisk_id='',reservation_id='r-chevi327',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1976450145',owner_user_name='tempest-TestEncryptedCinderVolumes-1976450145-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:05:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='084f489a7b4c4fecba7b0942ed1b7203',uuid=804c52ce-4b15-4c12-bfe7-efe1281d3dc1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "526abf6f-0054-4f1e-8c8c-761a2476046a", "address": "fa:16:3e:53:a6:48", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap526abf6f-00", "ovs_interfaceid": "526abf6f-0054-4f1e-8c8c-761a2476046a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.569 238887 DEBUG nova.network.os_vif_util [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converting VIF {"id": "526abf6f-0054-4f1e-8c8c-761a2476046a", "address": "fa:16:3e:53:a6:48", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap526abf6f-00", "ovs_interfaceid": "526abf6f-0054-4f1e-8c8c-761a2476046a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.570 238887 DEBUG nova.network.os_vif_util [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:a6:48,bridge_name='br-int',has_traffic_filtering=True,id=526abf6f-0054-4f1e-8c8c-761a2476046a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap526abf6f-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.570 238887 DEBUG os_vif [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:a6:48,bridge_name='br-int',has_traffic_filtering=True,id=526abf6f-0054-4f1e-8c8c-761a2476046a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap526abf6f-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.572 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.572 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap526abf6f-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.574 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.575 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.577 238887 INFO os_vif [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:a6:48,bridge_name='br-int',has_traffic_filtering=True,id=526abf6f-0054-4f1e-8c8c-761a2476046a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap526abf6f-00')#033[00m
Feb  2 07:06:18 np0005604943 podman[264044]: 2026-02-02 12:06:18.594506091 +0000 UTC m=+0.041696446 container remove 0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.598 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[328298da-0bc0-4c9c-8ae6-8e61d844d6f1]: (4, ('Mon Feb  2 12:06:18 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 (0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406)\n0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406\nMon Feb  2 12:06:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 (0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406)\n0d77a412f738add4a7f7a56a7e2e20c23269e06ca86553e3136599a7ea9e2406\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.600 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[976143ef-07b4-4d77-a78d-8c36886f359c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.601 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb13b2a6-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:18 np0005604943 kernel: tapfb13b2a6-b0: left promiscuous mode
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.605 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.610 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.613 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[8516a283-75e5-4b0d-adce-a05183f3314a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.631 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[657fe36b-1f92-4164-a2f7-b91b2adef288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.633 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fdee38d7-f6c8-4fda-bb44-e3748c74ff30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.644 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[045598af-a1ac-4ada-ba0b-5045dadb89bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437865, 'reachable_time': 23970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264078, 'error': None, 'target': 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:18 np0005604943 systemd[1]: run-netns-ovnmeta\x2dfb13b2a6\x2db763\x2d41ef\x2da5c4\x2d123372e94249.mount: Deactivated successfully.
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.648 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:06:18 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:18.648 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[5868dc51-31e0-44a4-abc3-bb25bac581c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.799 238887 DEBUG nova.network.neutron [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Successfully updated port: 24bdd88c-5f95-463b-940e-03c2b17e5e19 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.813 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "refresh_cache-117c0603-9127-4e21-9fc6-df67391a5b24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.814 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquired lock "refresh_cache-117c0603-9127-4e21-9fc6-df67391a5b24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.814 238887 DEBUG nova.network.neutron [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.852 238887 INFO nova.virt.libvirt.driver [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Deleting instance files /var/lib/nova/instances/804c52ce-4b15-4c12-bfe7-efe1281d3dc1_del#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.853 238887 INFO nova.virt.libvirt.driver [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Deletion of /var/lib/nova/instances/804c52ce-4b15-4c12-bfe7-efe1281d3dc1_del complete#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.896 238887 DEBUG nova.compute.manager [req-39620320-a33e-483c-84de-4b64408cf541 req-61f7d89a-fb73-4b11-8a70-03d86454722d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Received event network-changed-24bdd88c-5f95-463b-940e-03c2b17e5e19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.896 238887 DEBUG nova.compute.manager [req-39620320-a33e-483c-84de-4b64408cf541 req-61f7d89a-fb73-4b11-8a70-03d86454722d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Refreshing instance network info cache due to event network-changed-24bdd88c-5f95-463b-940e-03c2b17e5e19. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.896 238887 DEBUG oslo_concurrency.lockutils [req-39620320-a33e-483c-84de-4b64408cf541 req-61f7d89a-fb73-4b11-8a70-03d86454722d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-117c0603-9127-4e21-9fc6-df67391a5b24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.904 238887 INFO nova.compute.manager [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.904 238887 DEBUG oslo.service.loopingcall [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.905 238887 DEBUG nova.compute.manager [-] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:06:18 np0005604943 nova_compute[238883]: 2026-02-02 12:06:18.905 238887 DEBUG nova.network.neutron [-] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.007 238887 DEBUG nova.network.neutron [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.740 238887 DEBUG nova.network.neutron [-] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.765 238887 INFO nova.compute.manager [-] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Took 0.86 seconds to deallocate network for instance.#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.821 238887 DEBUG oslo_concurrency.lockutils [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.821 238887 DEBUG oslo_concurrency.lockutils [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.835 238887 DEBUG nova.compute.manager [req-6b310bf8-1031-456b-8355-fa9bd196a0fb req-f21fda71-3f04-4dea-a3c8-8fcb354412e0 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Received event network-vif-deleted-526abf6f-0054-4f1e-8c8c-761a2476046a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.890 238887 DEBUG oslo_concurrency.processutils [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.928 238887 DEBUG nova.network.neutron [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Updating instance_info_cache with network_info: [{"id": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "address": "fa:16:3e:a9:59:79", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bdd88c-5f", "ovs_interfaceid": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.946 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Releasing lock "refresh_cache-117c0603-9127-4e21-9fc6-df67391a5b24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.946 238887 DEBUG nova.compute.manager [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Instance network_info: |[{"id": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "address": "fa:16:3e:a9:59:79", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bdd88c-5f", "ovs_interfaceid": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.947 238887 DEBUG oslo_concurrency.lockutils [req-39620320-a33e-483c-84de-4b64408cf541 req-61f7d89a-fb73-4b11-8a70-03d86454722d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-117c0603-9127-4e21-9fc6-df67391a5b24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.947 238887 DEBUG nova.network.neutron [req-39620320-a33e-483c-84de-4b64408cf541 req-61f7d89a-fb73-4b11-8a70-03d86454722d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Refreshing network info cache for port 24bdd88c-5f95-463b-940e-03c2b17e5e19 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.950 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Start _get_guest_xml network_info=[{"id": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "address": "fa:16:3e:a9:59:79", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bdd88c-5f", "ovs_interfaceid": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': 'd26619a0-2abb-44ff-805d-5494b9925df2', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '117c0603-9127-4e21-9fc6-df67391a5b24', 'attached_at': '', 'detached_at': '', 'volume_id': '35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c', 'serial': '35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.955 238887 WARNING nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.966 238887 DEBUG nova.virt.libvirt.host [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.967 238887 DEBUG nova.virt.libvirt.host [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.971 238887 DEBUG nova.virt.libvirt.host [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.971 238887 DEBUG nova.virt.libvirt.host [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.972 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.972 238887 DEBUG nova.virt.hardware [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.973 238887 DEBUG nova.virt.hardware [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.973 238887 DEBUG nova.virt.hardware [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.973 238887 DEBUG nova.virt.hardware [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.973 238887 DEBUG nova.virt.hardware [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.973 238887 DEBUG nova.virt.hardware [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.974 238887 DEBUG nova.virt.hardware [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.974 238887 DEBUG nova.virt.hardware [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.974 238887 DEBUG nova.virt.hardware [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.974 238887 DEBUG nova.virt.hardware [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.974 238887 DEBUG nova.virt.hardware [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.994 238887 DEBUG nova.storage.rbd_utils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 117c0603-9127-4e21-9fc6-df67391a5b24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:06:19 np0005604943 nova_compute[238883]: 2026-02-02 12:06:19.999 238887 DEBUG oslo_concurrency.processutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 214 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 730 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Feb  2 07:06:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:06:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263377930' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.450 238887 DEBUG oslo_concurrency.processutils [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.458 238887 DEBUG nova.compute.provider_tree [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.487 238887 DEBUG nova.scheduler.client.report [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:06:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:06:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3630495130' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.515 238887 DEBUG oslo_concurrency.lockutils [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.519 238887 DEBUG oslo_concurrency.processutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.547 238887 DEBUG nova.virt.libvirt.vif [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:06:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1199863715',display_name='tempest-TestVolumeBootPattern-server-1199863715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1199863715',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO8pS3TOdyjX/N+jIFJqRkOzhDpnnQvMuyVIbWIYhdDa58/4gu4+MtK78TaoPi0KBaxHL0lWzg2GYnnuAmOLK3vOMGsshwGNfMmLTGNRIjuKqnaNrr1v/EHYLJ6m8LkFkQ==',key_name='tempest-TestVolumeBootPattern-656783760',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-jvabnf70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:06:16Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=117c0603-9127-4e21-9fc6-df67391a5b24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "address": "fa:16:3e:a9:59:79", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bdd88c-5f", "ovs_interfaceid": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.548 238887 DEBUG nova.network.os_vif_util [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "address": "fa:16:3e:a9:59:79", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bdd88c-5f", "ovs_interfaceid": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.550 238887 DEBUG nova.network.os_vif_util [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:59:79,bridge_name='br-int',has_traffic_filtering=True,id=24bdd88c-5f95-463b-940e-03c2b17e5e19,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bdd88c-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.553 238887 DEBUG nova.objects.instance [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'pci_devices' on Instance uuid 117c0603-9127-4e21-9fc6-df67391a5b24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.561 238887 INFO nova.scheduler.client.report [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Deleted allocations for instance 804c52ce-4b15-4c12-bfe7-efe1281d3dc1#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.577 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  <uuid>117c0603-9127-4e21-9fc6-df67391a5b24</uuid>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  <name>instance-00000014</name>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestVolumeBootPattern-server-1199863715</nova:name>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:06:19</nova:creationTime>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:        <nova:user uuid="5e3fc9d8415541ecaa0da4968c9fa242">tempest-TestVolumeBootPattern-1059348902-project-member</nova:user>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:        <nova:project uuid="e66ed51ccbb840f083b8a86476696747">tempest-TestVolumeBootPattern-1059348902</nova:project>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:        <nova:port uuid="24bdd88c-5f95-463b-940e-03c2b17e5e19">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <entry name="serial">117c0603-9127-4e21-9fc6-df67391a5b24</entry>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <entry name="uuid">117c0603-9127-4e21-9fc6-df67391a5b24</entry>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/117c0603-9127-4e21-9fc6-df67391a5b24_disk.config">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <serial>35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c</serial>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:a9:59:79"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <target dev="tap24bdd88c-5f"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/117c0603-9127-4e21-9fc6-df67391a5b24/console.log" append="off"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:06:20 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:06:20 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:06:20 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:06:20 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.579 238887 DEBUG nova.compute.manager [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Preparing to wait for external event network-vif-plugged-24bdd88c-5f95-463b-940e-03c2b17e5e19 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.580 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.581 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.581 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.582 238887 DEBUG nova.virt.libvirt.vif [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:06:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1199863715',display_name='tempest-TestVolumeBootPattern-server-1199863715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1199863715',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO8pS3TOdyjX/N+jIFJqRkOzhDpnnQvMuyVIbWIYhdDa58/4gu4+MtK78TaoPi0KBaxHL0lWzg2GYnnuAmOLK3vOMGsshwGNfMmLTGNRIjuKqnaNrr1v/EHYLJ6m8LkFkQ==',key_name='tempest-TestVolumeBootPattern-656783760',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-jvabnf70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:06:16Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=117c0603-9127-4e21-9fc6-df67391a5b24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "address": "fa:16:3e:a9:59:79", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bdd88c-5f", "ovs_interfaceid": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.582 238887 DEBUG nova.network.os_vif_util [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "address": "fa:16:3e:a9:59:79", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bdd88c-5f", "ovs_interfaceid": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.583 238887 DEBUG nova.network.os_vif_util [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:59:79,bridge_name='br-int',has_traffic_filtering=True,id=24bdd88c-5f95-463b-940e-03c2b17e5e19,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bdd88c-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.583 238887 DEBUG os_vif [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:59:79,bridge_name='br-int',has_traffic_filtering=True,id=24bdd88c-5f95-463b-940e-03c2b17e5e19,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bdd88c-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.584 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.585 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.585 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.590 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.590 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24bdd88c-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.591 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24bdd88c-5f, col_values=(('external_ids', {'iface-id': '24bdd88c-5f95-463b-940e-03c2b17e5e19', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:59:79', 'vm-uuid': '117c0603-9127-4e21-9fc6-df67391a5b24'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:20 np0005604943 NetworkManager[49093]: <info>  [1770033980.5939] manager: (tap24bdd88c-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.594 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.598 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.599 238887 INFO os_vif [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:59:79,bridge_name='br-int',has_traffic_filtering=True,id=24bdd88c-5f95-463b-940e-03c2b17e5e19,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bdd88c-5f')#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.653 238887 DEBUG oslo_concurrency.lockutils [None req-2ae78c43-2fe1-4885-881a-0a699a976293 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "804c52ce-4b15-4c12-bfe7-efe1281d3dc1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.658 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.658 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.658 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No VIF found with MAC fa:16:3e:a9:59:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.659 238887 INFO nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Using config drive#033[00m
Feb  2 07:06:20 np0005604943 nova_compute[238883]: 2026-02-02 12:06:20.683 238887 DEBUG nova.storage.rbd_utils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 117c0603-9127-4e21-9fc6-df67391a5b24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.021 238887 INFO nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Creating config drive at /var/lib/nova/instances/117c0603-9127-4e21-9fc6-df67391a5b24/disk.config#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.025 238887 DEBUG oslo_concurrency.processutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/117c0603-9127-4e21-9fc6-df67391a5b24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphsxlff9k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.156 238887 DEBUG oslo_concurrency.processutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/117c0603-9127-4e21-9fc6-df67391a5b24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphsxlff9k" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.178 238887 DEBUG nova.storage.rbd_utils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 117c0603-9127-4e21-9fc6-df67391a5b24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.181 238887 DEBUG oslo_concurrency.processutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/117c0603-9127-4e21-9fc6-df67391a5b24/disk.config 117c0603-9127-4e21-9fc6-df67391a5b24_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.310 238887 DEBUG oslo_concurrency.processutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/117c0603-9127-4e21-9fc6-df67391a5b24/disk.config 117c0603-9127-4e21-9fc6-df67391a5b24_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.312 238887 INFO nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Deleting local config drive /var/lib/nova/instances/117c0603-9127-4e21-9fc6-df67391a5b24/disk.config because it was imported into RBD.#033[00m
Feb  2 07:06:21 np0005604943 kernel: tap24bdd88c-5f: entered promiscuous mode
Feb  2 07:06:21 np0005604943 NetworkManager[49093]: <info>  [1770033981.3625] manager: (tap24bdd88c-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Feb  2 07:06:21 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:21Z|00197|binding|INFO|Claiming lport 24bdd88c-5f95-463b-940e-03c2b17e5e19 for this chassis.
Feb  2 07:06:21 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:21Z|00198|binding|INFO|24bdd88c-5f95-463b-940e-03c2b17e5e19: Claiming fa:16:3e:a9:59:79 10.100.0.6
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.363 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.371 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:59:79 10.100.0.6'], port_security=['fa:16:3e:a9:59:79 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '117c0603-9127-4e21-9fc6-df67391a5b24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f445a686-10d3-4653-b101-b0c161d236b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=24bdd88c-5f95-463b-940e-03c2b17e5e19) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.371 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:21 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:21Z|00199|binding|INFO|Setting lport 24bdd88c-5f95-463b-940e-03c2b17e5e19 ovn-installed in OVS
Feb  2 07:06:21 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:21Z|00200|binding|INFO|Setting lport 24bdd88c-5f95-463b-940e-03c2b17e5e19 up in Southbound
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.372 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 24bdd88c-5f95-463b-940e-03c2b17e5e19 in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 bound to our chassis#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.372 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.374 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34290362-cccd-452d-8e7e-22a6057fdb60#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.375 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.381 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[54ea2683-aadc-4064-90d8-2a3f5991e161]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.382 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34290362-c1 in ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.384 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34290362-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.384 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2971ab-4735-49ab-8e2e-b4d14a305c3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.386 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[5226baa7-9a78-44ea-9d3c-a64fc0a7189e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 systemd-machined[206973]: New machine qemu-20-instance-00000014.
Feb  2 07:06:21 np0005604943 systemd-udevd[264216]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.398 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[388d5df0-d784-494e-a71a-195bc07c8e3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 NetworkManager[49093]: <info>  [1770033981.4043] device (tap24bdd88c-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:06:21 np0005604943 systemd[1]: Started Virtual Machine qemu-20-instance-00000014.
Feb  2 07:06:21 np0005604943 NetworkManager[49093]: <info>  [1770033981.4060] device (tap24bdd88c-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.411 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[60705a30-886f-4386-badd-a77b4a2753d3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.439 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[c78befe6-ac33-426e-86cd-05cea8a3972d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 systemd-udevd[264220]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:06:21 np0005604943 NetworkManager[49093]: <info>  [1770033981.4459] manager: (tap34290362-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/105)
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.444 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d390d227-a55a-4daf-ae9d-7e314e72c48d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.460 238887 DEBUG nova.network.neutron [req-39620320-a33e-483c-84de-4b64408cf541 req-61f7d89a-fb73-4b11-8a70-03d86454722d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Updated VIF entry in instance network info cache for port 24bdd88c-5f95-463b-940e-03c2b17e5e19. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.461 238887 DEBUG nova.network.neutron [req-39620320-a33e-483c-84de-4b64408cf541 req-61f7d89a-fb73-4b11-8a70-03d86454722d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Updating instance_info_cache with network_info: [{"id": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "address": "fa:16:3e:a9:59:79", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bdd88c-5f", "ovs_interfaceid": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.479 238887 DEBUG oslo_concurrency.lockutils [req-39620320-a33e-483c-84de-4b64408cf541 req-61f7d89a-fb73-4b11-8a70-03d86454722d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-117c0603-9127-4e21-9fc6-df67391a5b24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.484 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8d37ca-2c73-4995-8ee9-51d81713bb30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.488 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[a15fb9c5-208e-4e62-9be8-25817c89bac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 NetworkManager[49093]: <info>  [1770033981.5120] device (tap34290362-c0): carrier: link connected
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.517 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[fa530279-de42-461e-a0c1-e03c42d548e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.536 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c96769e6-6904-4686-bdcf-b7c591fcd638]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442686, 'reachable_time': 38803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264248, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.552 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3f829f36-0736-4539-9592-baeacd6fe1cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb3:39d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442686, 'tstamp': 442686}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264249, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007621843600448566 of space, bias 1.0, pg target 0.228655308013457 quantized to 32 (current 32)
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0007093669365740477 of space, bias 1.0, pg target 0.21281008097221432 quantized to 32 (current 32)
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.0485336662402273e-06 of space, bias 1.0, pg target 0.0006145600998720682 quantized to 32 (current 32)
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006667443486778983 of space, bias 1.0, pg target 0.2000233046033695 quantized to 32 (current 32)
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0765087110049062e-06 of space, bias 4.0, pg target 0.0012918104532058873 quantized to 16 (current 16)
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:06:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.577 238887 DEBUG nova.compute.manager [req-4a46edc1-e28d-4696-95b4-c3dd7e422357 req-5b440aff-be33-4d6f-9097-c916ed779abc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Received event network-vif-plugged-24bdd88c-5f95-463b-940e-03c2b17e5e19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.578 238887 DEBUG oslo_concurrency.lockutils [req-4a46edc1-e28d-4696-95b4-c3dd7e422357 req-5b440aff-be33-4d6f-9097-c916ed779abc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.578 238887 DEBUG oslo_concurrency.lockutils [req-4a46edc1-e28d-4696-95b4-c3dd7e422357 req-5b440aff-be33-4d6f-9097-c916ed779abc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.578 238887 DEBUG oslo_concurrency.lockutils [req-4a46edc1-e28d-4696-95b4-c3dd7e422357 req-5b440aff-be33-4d6f-9097-c916ed779abc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.579 238887 DEBUG nova.compute.manager [req-4a46edc1-e28d-4696-95b4-c3dd7e422357 req-5b440aff-be33-4d6f-9097-c916ed779abc 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Processing event network-vif-plugged-24bdd88c-5f95-463b-940e-03c2b17e5e19 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.580 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[f99b4ba0-af99-4aa2-ba80-607bd08414d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442686, 'reachable_time': 38803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264250, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.612 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[734451bd-2350-4279-9499-7f405e5026fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.635 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.671 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fd272cde-3352-44f6-a20c-9cbf405d8b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.672 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.673 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.673 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34290362-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:21 np0005604943 kernel: tap34290362-c0: entered promiscuous mode
Feb  2 07:06:21 np0005604943 NetworkManager[49093]: <info>  [1770033981.6761] manager: (tap34290362-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.675 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.678 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34290362-c0, col_values=(('external_ids', {'iface-id': '54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.679 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:21 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:21Z|00201|binding|INFO|Releasing lport 54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c from this chassis (sb_readonly=0)
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.680 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:06:21 np0005604943 nova_compute[238883]: 2026-02-02 12:06:21.686 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.685 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd921c0-26fd-49eb-9cd1-94812970c476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.687 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-34290362-cccd-452d-8e7e-22a6057fdb60
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID 34290362-cccd-452d-8e7e-22a6057fdb60
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 07:06:21 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:21.687 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'env', 'PROCESS_TAG=haproxy-34290362-cccd-452d-8e7e-22a6057fdb60', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34290362-cccd-452d-8e7e-22a6057fdb60.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 07:06:22 np0005604943 podman[264282]: 2026-02-02 12:06:22.054399782 +0000 UTC m=+0.045398846 container create aab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:06:22 np0005604943 systemd[1]: Started libpod-conmon-aab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd.scope.
Feb  2 07:06:22 np0005604943 podman[264282]: 2026-02-02 12:06:22.03208667 +0000 UTC m=+0.023085774 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:06:22 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:06:22 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e633287686008b206bacce5782b87eeaeee5089955e291482128e84e45282f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:22 np0005604943 podman[264282]: 2026-02-02 12:06:22.148922472 +0000 UTC m=+0.139921556 container init aab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:06:22 np0005604943 podman[264282]: 2026-02-02 12:06:22.153517626 +0000 UTC m=+0.144516690 container start aab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127)
Feb  2 07:06:22 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[264298]: [NOTICE]   (264302) : New worker (264304) forked
Feb  2 07:06:22 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[264298]: [NOTICE]   (264302) : Loading success.
Feb  2 07:06:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 156 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Feb  2 07:06:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:06:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3446722534' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:06:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:06:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3446722534' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.566 238887 DEBUG nova.compute.manager [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.569 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033982.5655847, 117c0603-9127-4e21-9fc6-df67391a5b24 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.569 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] VM Started (Lifecycle Event)#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.573 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.577 238887 INFO nova.virt.libvirt.driver [-] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Instance spawned successfully.#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.577 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.592 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.599 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.603 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.603 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.604 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.604 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.604 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.605 238887 DEBUG nova.virt.libvirt.driver [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.640 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.640 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033982.567094, 117c0603-9127-4e21-9fc6-df67391a5b24 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.640 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.668 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.672 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770033982.571771, 117c0603-9127-4e21-9fc6-df67391a5b24 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.672 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.681 238887 INFO nova.compute.manager [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Took 5.26 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.682 238887 DEBUG nova.compute.manager [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.692 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.695 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.727 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:06:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.754 238887 INFO nova.compute.manager [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Took 7.50 seconds to build instance.#033[00m
Feb  2 07:06:22 np0005604943 nova_compute[238883]: 2026-02-02 12:06:22.776 238887 DEBUG oslo_concurrency.lockutils [None req-46d7e61c-3e95-42fe-ab0d-4f8b738af9d4 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:23 np0005604943 nova_compute[238883]: 2026-02-02 12:06:23.688 238887 DEBUG nova.compute.manager [req-d8bffd91-bada-4d12-96e7-323d71c1752b req-25da574d-c818-4bf5-ab73-d4f05691ad9d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Received event network-vif-plugged-24bdd88c-5f95-463b-940e-03c2b17e5e19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:23 np0005604943 nova_compute[238883]: 2026-02-02 12:06:23.689 238887 DEBUG oslo_concurrency.lockutils [req-d8bffd91-bada-4d12-96e7-323d71c1752b req-25da574d-c818-4bf5-ab73-d4f05691ad9d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:23 np0005604943 nova_compute[238883]: 2026-02-02 12:06:23.689 238887 DEBUG oslo_concurrency.lockutils [req-d8bffd91-bada-4d12-96e7-323d71c1752b req-25da574d-c818-4bf5-ab73-d4f05691ad9d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:23 np0005604943 nova_compute[238883]: 2026-02-02 12:06:23.689 238887 DEBUG oslo_concurrency.lockutils [req-d8bffd91-bada-4d12-96e7-323d71c1752b req-25da574d-c818-4bf5-ab73-d4f05691ad9d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:23 np0005604943 nova_compute[238883]: 2026-02-02 12:06:23.689 238887 DEBUG nova.compute.manager [req-d8bffd91-bada-4d12-96e7-323d71c1752b req-25da574d-c818-4bf5-ab73-d4f05691ad9d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] No waiting events found dispatching network-vif-plugged-24bdd88c-5f95-463b-940e-03c2b17e5e19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:06:23 np0005604943 nova_compute[238883]: 2026-02-02 12:06:23.690 238887 WARNING nova.compute.manager [req-d8bffd91-bada-4d12-96e7-323d71c1752b req-25da574d-c818-4bf5-ab73-d4f05691ad9d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Received unexpected event network-vif-plugged-24bdd88c-5f95-463b-940e-03c2b17e5e19 for instance with vm_state active and task_state None.#033[00m
Feb  2 07:06:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:06:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/14279216' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:06:23 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:06:23 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/14279216' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:06:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 962 KiB/s rd, 966 KiB/s wr, 104 op/s
Feb  2 07:06:25 np0005604943 nova_compute[238883]: 2026-02-02 12:06:25.594 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:26 np0005604943 nova_compute[238883]: 2026-02-02 12:06:26.160 238887 DEBUG nova.compute.manager [req-38de78a7-a49a-4148-bf47-dc79576ff93d req-2885d41e-50b5-4b70-a01e-adba15693598 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Received event network-changed-24bdd88c-5f95-463b-940e-03c2b17e5e19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:26 np0005604943 nova_compute[238883]: 2026-02-02 12:06:26.160 238887 DEBUG nova.compute.manager [req-38de78a7-a49a-4148-bf47-dc79576ff93d req-2885d41e-50b5-4b70-a01e-adba15693598 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Refreshing instance network info cache due to event network-changed-24bdd88c-5f95-463b-940e-03c2b17e5e19. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:06:26 np0005604943 nova_compute[238883]: 2026-02-02 12:06:26.160 238887 DEBUG oslo_concurrency.lockutils [req-38de78a7-a49a-4148-bf47-dc79576ff93d req-2885d41e-50b5-4b70-a01e-adba15693598 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-117c0603-9127-4e21-9fc6-df67391a5b24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:06:26 np0005604943 nova_compute[238883]: 2026-02-02 12:06:26.161 238887 DEBUG oslo_concurrency.lockutils [req-38de78a7-a49a-4148-bf47-dc79576ff93d req-2885d41e-50b5-4b70-a01e-adba15693598 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-117c0603-9127-4e21-9fc6-df67391a5b24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:06:26 np0005604943 nova_compute[238883]: 2026-02-02 12:06:26.161 238887 DEBUG nova.network.neutron [req-38de78a7-a49a-4148-bf47-dc79576ff93d req-2885d41e-50b5-4b70-a01e-adba15693598 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Refreshing network info cache for port 24bdd88c-5f95-463b-940e-03c2b17e5e19 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:06:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 947 KiB/s rd, 16 KiB/s wr, 83 op/s
Feb  2 07:06:26 np0005604943 nova_compute[238883]: 2026-02-02 12:06:26.637 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:06:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 16 KiB/s wr, 132 op/s
Feb  2 07:06:28 np0005604943 nova_compute[238883]: 2026-02-02 12:06:28.638 238887 DEBUG nova.network.neutron [req-38de78a7-a49a-4148-bf47-dc79576ff93d req-2885d41e-50b5-4b70-a01e-adba15693598 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Updated VIF entry in instance network info cache for port 24bdd88c-5f95-463b-940e-03c2b17e5e19. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:06:28 np0005604943 nova_compute[238883]: 2026-02-02 12:06:28.639 238887 DEBUG nova.network.neutron [req-38de78a7-a49a-4148-bf47-dc79576ff93d req-2885d41e-50b5-4b70-a01e-adba15693598 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Updating instance_info_cache with network_info: [{"id": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "address": "fa:16:3e:a9:59:79", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bdd88c-5f", "ovs_interfaceid": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:06:28 np0005604943 nova_compute[238883]: 2026-02-02 12:06:28.678 238887 DEBUG oslo_concurrency.lockutils [req-38de78a7-a49a-4148-bf47-dc79576ff93d req-2885d41e-50b5-4b70-a01e-adba15693598 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-117c0603-9127-4e21-9fc6-df67391a5b24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:06:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 14 KiB/s wr, 133 op/s
Feb  2 07:06:30 np0005604943 nova_compute[238883]: 2026-02-02 12:06:30.597 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:31 np0005604943 nova_compute[238883]: 2026-02-02 12:06:31.639 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 14 KiB/s wr, 142 op/s
Feb  2 07:06:32 np0005604943 nova_compute[238883]: 2026-02-02 12:06:32.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:06:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:06:33 np0005604943 nova_compute[238883]: 2026-02-02 12:06:33.543 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770033978.542073, 804c52ce-4b15-4c12-bfe7-efe1281d3dc1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:06:33 np0005604943 nova_compute[238883]: 2026-02-02 12:06:33.544 238887 INFO nova.compute.manager [-] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:06:33 np0005604943 nova_compute[238883]: 2026-02-02 12:06:33.564 238887 DEBUG nova.compute.manager [None req-80201a84-37b3-41e5-bc4a-616eee9103d9 - - - - - -] [instance: 804c52ce-4b15-4c12-bfe7-efe1281d3dc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 135 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 22 KiB/s wr, 118 op/s
Feb  2 07:06:35 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:35Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:59:79 10.100.0.6
Feb  2 07:06:35 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:35Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:59:79 10.100.0.6
Feb  2 07:06:35 np0005604943 nova_compute[238883]: 2026-02-02 12:06:35.600 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:35 np0005604943 nova_compute[238883]: 2026-02-02 12:06:35.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:06:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 135 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 22 KiB/s wr, 65 op/s
Feb  2 07:06:36 np0005604943 nova_compute[238883]: 2026-02-02 12:06:36.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:06:36 np0005604943 nova_compute[238883]: 2026-02-02 12:06:36.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:06:36 np0005604943 nova_compute[238883]: 2026-02-02 12:06:36.670 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:36 np0005604943 nova_compute[238883]: 2026-02-02 12:06:36.691 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 07:06:36 np0005604943 nova_compute[238883]: 2026-02-02 12:06:36.691 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:06:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:06:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:06:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:06:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:06:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:06:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:06:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:06:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:06:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:06:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:06:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:06:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:06:37 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:06:37 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:06:37 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:06:37 np0005604943 podman[264499]: 2026-02-02 12:06:37.143342922 +0000 UTC m=+0.053158884 container create 73e3d522ed6a0e5b391bd4463d7bd1b9f6ee2c7f0d468eeeffd690c2229c677d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hodgkin, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Feb  2 07:06:37 np0005604943 systemd[1]: Started libpod-conmon-73e3d522ed6a0e5b391bd4463d7bd1b9f6ee2c7f0d468eeeffd690c2229c677d.scope.
Feb  2 07:06:37 np0005604943 podman[264499]: 2026-02-02 12:06:37.111660378 +0000 UTC m=+0.021476370 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:06:37 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:06:37 np0005604943 podman[264499]: 2026-02-02 12:06:37.245369715 +0000 UTC m=+0.155185707 container init 73e3d522ed6a0e5b391bd4463d7bd1b9f6ee2c7f0d468eeeffd690c2229c677d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Feb  2 07:06:37 np0005604943 podman[264499]: 2026-02-02 12:06:37.252098756 +0000 UTC m=+0.161914718 container start 73e3d522ed6a0e5b391bd4463d7bd1b9f6ee2c7f0d468eeeffd690c2229c677d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 07:06:37 np0005604943 podman[264499]: 2026-02-02 12:06:37.255712024 +0000 UTC m=+0.165527986 container attach 73e3d522ed6a0e5b391bd4463d7bd1b9f6ee2c7f0d468eeeffd690c2229c677d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hodgkin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 07:06:37 np0005604943 systemd[1]: libpod-73e3d522ed6a0e5b391bd4463d7bd1b9f6ee2c7f0d468eeeffd690c2229c677d.scope: Deactivated successfully.
Feb  2 07:06:37 np0005604943 elated_hodgkin[264515]: 167 167
Feb  2 07:06:37 np0005604943 conmon[264515]: conmon 73e3d522ed6a0e5b391b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73e3d522ed6a0e5b391bd4463d7bd1b9f6ee2c7f0d468eeeffd690c2229c677d.scope/container/memory.events
Feb  2 07:06:37 np0005604943 podman[264499]: 2026-02-02 12:06:37.262342712 +0000 UTC m=+0.172158694 container died 73e3d522ed6a0e5b391bd4463d7bd1b9f6ee2c7f0d468eeeffd690c2229c677d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hodgkin, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:06:37 np0005604943 systemd[1]: var-lib-containers-storage-overlay-bd85c0706a45ffd63da00853206d64cd113967f2bef0b2a1c5a8c7fe048339cb-merged.mount: Deactivated successfully.
Feb  2 07:06:37 np0005604943 podman[264499]: 2026-02-02 12:06:37.304567702 +0000 UTC m=+0.214383664 container remove 73e3d522ed6a0e5b391bd4463d7bd1b9f6ee2c7f0d468eeeffd690c2229c677d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hodgkin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:06:37 np0005604943 systemd[1]: libpod-conmon-73e3d522ed6a0e5b391bd4463d7bd1b9f6ee2c7f0d468eeeffd690c2229c677d.scope: Deactivated successfully.
Feb  2 07:06:37 np0005604943 podman[264541]: 2026-02-02 12:06:37.448584308 +0000 UTC m=+0.041167362 container create 997227cd2c3bdc1979196e438f5eb5f3f08976acd2c00be9d7fccb52439c5d7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 07:06:37 np0005604943 systemd[1]: Started libpod-conmon-997227cd2c3bdc1979196e438f5eb5f3f08976acd2c00be9d7fccb52439c5d7e.scope.
Feb  2 07:06:37 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:06:37 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9aeffe50eb6a5bc14991011dc822bf545687dd9e5efff317efdcab90a26b37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:37 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9aeffe50eb6a5bc14991011dc822bf545687dd9e5efff317efdcab90a26b37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:37 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9aeffe50eb6a5bc14991011dc822bf545687dd9e5efff317efdcab90a26b37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:37 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9aeffe50eb6a5bc14991011dc822bf545687dd9e5efff317efdcab90a26b37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:37 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9aeffe50eb6a5bc14991011dc822bf545687dd9e5efff317efdcab90a26b37/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:37 np0005604943 podman[264541]: 2026-02-02 12:06:37.42978613 +0000 UTC m=+0.022369204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:06:37 np0005604943 podman[264541]: 2026-02-02 12:06:37.532495261 +0000 UTC m=+0.125078335 container init 997227cd2c3bdc1979196e438f5eb5f3f08976acd2c00be9d7fccb52439c5d7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:06:37 np0005604943 podman[264541]: 2026-02-02 12:06:37.538942785 +0000 UTC m=+0.131525839 container start 997227cd2c3bdc1979196e438f5eb5f3f08976acd2c00be9d7fccb52439c5d7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackwell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 07:06:37 np0005604943 podman[264541]: 2026-02-02 12:06:37.542371947 +0000 UTC m=+0.134955021 container attach 997227cd2c3bdc1979196e438f5eb5f3f08976acd2c00be9d7fccb52439c5d7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 07:06:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:06:37 np0005604943 intelligent_blackwell[264558]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:06:37 np0005604943 intelligent_blackwell[264558]: --> All data devices are unavailable
Feb  2 07:06:38 np0005604943 systemd[1]: libpod-997227cd2c3bdc1979196e438f5eb5f3f08976acd2c00be9d7fccb52439c5d7e.scope: Deactivated successfully.
Feb  2 07:06:38 np0005604943 podman[264541]: 2026-02-02 12:06:38.019226132 +0000 UTC m=+0.611809206 container died 997227cd2c3bdc1979196e438f5eb5f3f08976acd2c00be9d7fccb52439c5d7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackwell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 07:06:38 np0005604943 systemd[1]: var-lib-containers-storage-overlay-da9aeffe50eb6a5bc14991011dc822bf545687dd9e5efff317efdcab90a26b37-merged.mount: Deactivated successfully.
Feb  2 07:06:38 np0005604943 podman[264541]: 2026-02-02 12:06:38.068703557 +0000 UTC m=+0.661286611 container remove 997227cd2c3bdc1979196e438f5eb5f3f08976acd2c00be9d7fccb52439c5d7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_blackwell, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb  2 07:06:38 np0005604943 systemd[1]: libpod-conmon-997227cd2c3bdc1979196e438f5eb5f3f08976acd2c00be9d7fccb52439c5d7e.scope: Deactivated successfully.
Feb  2 07:06:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 161 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 1.5 MiB/s wr, 121 op/s
Feb  2 07:06:38 np0005604943 podman[264651]: 2026-02-02 12:06:38.504461383 +0000 UTC m=+0.038113970 container create 8fb74cb71f23e7c49d0a32cac01991a3608b6e3e40c2e1e44068388992cf51c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Feb  2 07:06:38 np0005604943 systemd[1]: Started libpod-conmon-8fb74cb71f23e7c49d0a32cac01991a3608b6e3e40c2e1e44068388992cf51c4.scope.
Feb  2 07:06:38 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:06:38 np0005604943 podman[264651]: 2026-02-02 12:06:38.488007609 +0000 UTC m=+0.021660226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:06:38 np0005604943 podman[264651]: 2026-02-02 12:06:38.587607206 +0000 UTC m=+0.121259813 container init 8fb74cb71f23e7c49d0a32cac01991a3608b6e3e40c2e1e44068388992cf51c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Feb  2 07:06:38 np0005604943 podman[264651]: 2026-02-02 12:06:38.594491032 +0000 UTC m=+0.128143619 container start 8fb74cb71f23e7c49d0a32cac01991a3608b6e3e40c2e1e44068388992cf51c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:06:38 np0005604943 podman[264651]: 2026-02-02 12:06:38.597887493 +0000 UTC m=+0.131540080 container attach 8fb74cb71f23e7c49d0a32cac01991a3608b6e3e40c2e1e44068388992cf51c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hawking, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 07:06:38 np0005604943 laughing_hawking[264667]: 167 167
Feb  2 07:06:38 np0005604943 systemd[1]: libpod-8fb74cb71f23e7c49d0a32cac01991a3608b6e3e40c2e1e44068388992cf51c4.scope: Deactivated successfully.
Feb  2 07:06:38 np0005604943 conmon[264667]: conmon 8fb74cb71f23e7c49d0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8fb74cb71f23e7c49d0a32cac01991a3608b6e3e40c2e1e44068388992cf51c4.scope/container/memory.events
Feb  2 07:06:38 np0005604943 podman[264651]: 2026-02-02 12:06:38.602008444 +0000 UTC m=+0.135661031 container died 8fb74cb71f23e7c49d0a32cac01991a3608b6e3e40c2e1e44068388992cf51c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 07:06:38 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f81ace054a343e1c999b91ad595b23ea678fd45ee64d236f531dc42450b0069c-merged.mount: Deactivated successfully.
Feb  2 07:06:38 np0005604943 podman[264651]: 2026-02-02 12:06:38.632267091 +0000 UTC m=+0.165919678 container remove 8fb74cb71f23e7c49d0a32cac01991a3608b6e3e40c2e1e44068388992cf51c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 07:06:38 np0005604943 nova_compute[238883]: 2026-02-02 12:06:38.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:06:38 np0005604943 nova_compute[238883]: 2026-02-02 12:06:38.645 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:06:38 np0005604943 nova_compute[238883]: 2026-02-02 12:06:38.645 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:06:38 np0005604943 systemd[1]: libpod-conmon-8fb74cb71f23e7c49d0a32cac01991a3608b6e3e40c2e1e44068388992cf51c4.scope: Deactivated successfully.
Feb  2 07:06:38 np0005604943 nova_compute[238883]: 2026-02-02 12:06:38.679 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:38 np0005604943 nova_compute[238883]: 2026-02-02 12:06:38.680 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:38 np0005604943 nova_compute[238883]: 2026-02-02 12:06:38.681 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:38 np0005604943 nova_compute[238883]: 2026-02-02 12:06:38.681 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:06:38 np0005604943 nova_compute[238883]: 2026-02-02 12:06:38.682 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:38 np0005604943 podman[264692]: 2026-02-02 12:06:38.778441634 +0000 UTC m=+0.040126663 container create 542b54b1a2fc9dc78df35d583623861c6d17ef4dbc87ca1c45eb51b171e9ffaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_borg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 07:06:38 np0005604943 systemd[1]: Started libpod-conmon-542b54b1a2fc9dc78df35d583623861c6d17ef4dbc87ca1c45eb51b171e9ffaa.scope.
Feb  2 07:06:38 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:06:38 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb33e14fdf5f35bd5a2ef2d0421957c1211d560062fc7124452c0b67537d356c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:38 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb33e14fdf5f35bd5a2ef2d0421957c1211d560062fc7124452c0b67537d356c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:38 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb33e14fdf5f35bd5a2ef2d0421957c1211d560062fc7124452c0b67537d356c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:38 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb33e14fdf5f35bd5a2ef2d0421957c1211d560062fc7124452c0b67537d356c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:38 np0005604943 podman[264692]: 2026-02-02 12:06:38.762495374 +0000 UTC m=+0.024180423 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:06:38 np0005604943 podman[264692]: 2026-02-02 12:06:38.867115667 +0000 UTC m=+0.128800716 container init 542b54b1a2fc9dc78df35d583623861c6d17ef4dbc87ca1c45eb51b171e9ffaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_borg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:06:38 np0005604943 podman[264692]: 2026-02-02 12:06:38.873712974 +0000 UTC m=+0.135398003 container start 542b54b1a2fc9dc78df35d583623861c6d17ef4dbc87ca1c45eb51b171e9ffaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_borg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:06:38 np0005604943 podman[264692]: 2026-02-02 12:06:38.87686667 +0000 UTC m=+0.138551719 container attach 542b54b1a2fc9dc78df35d583623861c6d17ef4dbc87ca1c45eb51b171e9ffaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_borg, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:06:39 np0005604943 bold_borg[264726]: {
Feb  2 07:06:39 np0005604943 bold_borg[264726]:    "0": [
Feb  2 07:06:39 np0005604943 bold_borg[264726]:        {
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "devices": [
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "/dev/loop3"
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            ],
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_name": "ceph_lv0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_size": "21470642176",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "name": "ceph_lv0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "tags": {
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.cluster_name": "ceph",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.crush_device_class": "",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.encrypted": "0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.objectstore": "bluestore",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.osd_id": "0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.type": "block",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.vdo": "0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.with_tpm": "0"
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            },
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "type": "block",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "vg_name": "ceph_vg0"
Feb  2 07:06:39 np0005604943 bold_borg[264726]:        }
Feb  2 07:06:39 np0005604943 bold_borg[264726]:    ],
Feb  2 07:06:39 np0005604943 bold_borg[264726]:    "1": [
Feb  2 07:06:39 np0005604943 bold_borg[264726]:        {
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "devices": [
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "/dev/loop4"
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            ],
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_name": "ceph_lv1",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_size": "21470642176",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "name": "ceph_lv1",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "tags": {
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.cluster_name": "ceph",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.crush_device_class": "",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.encrypted": "0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.objectstore": "bluestore",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.osd_id": "1",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.type": "block",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.vdo": "0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.with_tpm": "0"
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            },
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "type": "block",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "vg_name": "ceph_vg1"
Feb  2 07:06:39 np0005604943 bold_borg[264726]:        }
Feb  2 07:06:39 np0005604943 bold_borg[264726]:    ],
Feb  2 07:06:39 np0005604943 bold_borg[264726]:    "2": [
Feb  2 07:06:39 np0005604943 bold_borg[264726]:        {
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "devices": [
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "/dev/loop5"
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            ],
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_name": "ceph_lv2",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_size": "21470642176",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "name": "ceph_lv2",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "tags": {
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.cluster_name": "ceph",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.crush_device_class": "",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.encrypted": "0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.objectstore": "bluestore",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.osd_id": "2",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.type": "block",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.vdo": "0",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:                "ceph.with_tpm": "0"
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            },
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "type": "block",
Feb  2 07:06:39 np0005604943 bold_borg[264726]:            "vg_name": "ceph_vg2"
Feb  2 07:06:39 np0005604943 bold_borg[264726]:        }
Feb  2 07:06:39 np0005604943 bold_borg[264726]:    ]
Feb  2 07:06:39 np0005604943 bold_borg[264726]: }
Feb  2 07:06:39 np0005604943 systemd[1]: libpod-542b54b1a2fc9dc78df35d583623861c6d17ef4dbc87ca1c45eb51b171e9ffaa.scope: Deactivated successfully.
Feb  2 07:06:39 np0005604943 podman[264692]: 2026-02-02 12:06:39.184239322 +0000 UTC m=+0.445924361 container died 542b54b1a2fc9dc78df35d583623861c6d17ef4dbc87ca1c45eb51b171e9ffaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_borg, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:06:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:06:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592710857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:06:39 np0005604943 nova_compute[238883]: 2026-02-02 12:06:39.260 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:39 np0005604943 systemd[1]: var-lib-containers-storage-overlay-bb33e14fdf5f35bd5a2ef2d0421957c1211d560062fc7124452c0b67537d356c-merged.mount: Deactivated successfully.
Feb  2 07:06:39 np0005604943 nova_compute[238883]: 2026-02-02 12:06:39.347 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:06:39 np0005604943 nova_compute[238883]: 2026-02-02 12:06:39.347 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:06:39 np0005604943 podman[264692]: 2026-02-02 12:06:39.514389839 +0000 UTC m=+0.776074868 container remove 542b54b1a2fc9dc78df35d583623861c6d17ef4dbc87ca1c45eb51b171e9ffaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Feb  2 07:06:39 np0005604943 nova_compute[238883]: 2026-02-02 12:06:39.539 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:06:39 np0005604943 nova_compute[238883]: 2026-02-02 12:06:39.541 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4105MB free_disk=59.98800448235124GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:06:39 np0005604943 nova_compute[238883]: 2026-02-02 12:06:39.541 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:39 np0005604943 nova_compute[238883]: 2026-02-02 12:06:39.541 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:39 np0005604943 systemd[1]: libpod-conmon-542b54b1a2fc9dc78df35d583623861c6d17ef4dbc87ca1c45eb51b171e9ffaa.scope: Deactivated successfully.
Feb  2 07:06:39 np0005604943 nova_compute[238883]: 2026-02-02 12:06:39.620 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 117c0603-9127-4e21-9fc6-df67391a5b24 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:06:39 np0005604943 nova_compute[238883]: 2026-02-02 12:06:39.621 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:06:39 np0005604943 nova_compute[238883]: 2026-02-02 12:06:39.621 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:06:39 np0005604943 nova_compute[238883]: 2026-02-02 12:06:39.658 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:40 np0005604943 podman[264835]: 2026-02-02 12:06:39.960346239 +0000 UTC m=+0.028515900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:06:40 np0005604943 podman[264835]: 2026-02-02 12:06:40.068160678 +0000 UTC m=+0.136330319 container create c2faa60f9c7428e581fe874231c94c4048d5849b10fbd2669d97bd5303f14676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swirles, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 07:06:40 np0005604943 systemd[1]: Started libpod-conmon-c2faa60f9c7428e581fe874231c94c4048d5849b10fbd2669d97bd5303f14676.scope.
Feb  2 07:06:40 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:06:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:06:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2453691696' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:06:40 np0005604943 nova_compute[238883]: 2026-02-02 12:06:40.217 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:40 np0005604943 podman[264835]: 2026-02-02 12:06:40.223065327 +0000 UTC m=+0.291234988 container init c2faa60f9c7428e581fe874231c94c4048d5849b10fbd2669d97bd5303f14676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:06:40 np0005604943 nova_compute[238883]: 2026-02-02 12:06:40.224 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:06:40 np0005604943 podman[264835]: 2026-02-02 12:06:40.231975697 +0000 UTC m=+0.300145338 container start c2faa60f9c7428e581fe874231c94c4048d5849b10fbd2669d97bd5303f14676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swirles, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:06:40 np0005604943 musing_swirles[264852]: 167 167
Feb  2 07:06:40 np0005604943 systemd[1]: libpod-c2faa60f9c7428e581fe874231c94c4048d5849b10fbd2669d97bd5303f14676.scope: Deactivated successfully.
Feb  2 07:06:40 np0005604943 nova_compute[238883]: 2026-02-02 12:06:40.240 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:06:40 np0005604943 nova_compute[238883]: 2026-02-02 12:06:40.268 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:06:40 np0005604943 nova_compute[238883]: 2026-02-02 12:06:40.269 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:40 np0005604943 podman[264835]: 2026-02-02 12:06:40.304088333 +0000 UTC m=+0.372257984 container attach c2faa60f9c7428e581fe874231c94c4048d5849b10fbd2669d97bd5303f14676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swirles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 07:06:40 np0005604943 podman[264835]: 2026-02-02 12:06:40.304524665 +0000 UTC m=+0.372694316 container died c2faa60f9c7428e581fe874231c94c4048d5849b10fbd2669d97bd5303f14676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 07:06:40 np0005604943 systemd[1]: var-lib-containers-storage-overlay-25a1bb91de85dc0593a7eb9d5de70788322393a043052ac2b594b4a6041abbd6-merged.mount: Deactivated successfully.
Feb  2 07:06:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 83 op/s
Feb  2 07:06:40 np0005604943 podman[264835]: 2026-02-02 12:06:40.453394802 +0000 UTC m=+0.521564443 container remove c2faa60f9c7428e581fe874231c94c4048d5849b10fbd2669d97bd5303f14676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_swirles, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:06:40 np0005604943 systemd[1]: libpod-conmon-c2faa60f9c7428e581fe874231c94c4048d5849b10fbd2669d97bd5303f14676.scope: Deactivated successfully.
Feb  2 07:06:40 np0005604943 podman[264880]: 2026-02-02 12:06:40.596966214 +0000 UTC m=+0.044855021 container create 74b9881134df575633eaa7a3fde72aeb93bb014b6d8d240a84c97c6f388f5513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:06:40 np0005604943 nova_compute[238883]: 2026-02-02 12:06:40.603 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:40 np0005604943 systemd[1]: Started libpod-conmon-74b9881134df575633eaa7a3fde72aeb93bb014b6d8d240a84c97c6f388f5513.scope.
Feb  2 07:06:40 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:06:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dfa66c834e9ac3fe6b275f92832ae28f96665558209b3dc95c48302fb68f355/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dfa66c834e9ac3fe6b275f92832ae28f96665558209b3dc95c48302fb68f355/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dfa66c834e9ac3fe6b275f92832ae28f96665558209b3dc95c48302fb68f355/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:40 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dfa66c834e9ac3fe6b275f92832ae28f96665558209b3dc95c48302fb68f355/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:40 np0005604943 podman[264880]: 2026-02-02 12:06:40.577289904 +0000 UTC m=+0.025178761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:06:40 np0005604943 podman[264880]: 2026-02-02 12:06:40.675792731 +0000 UTC m=+0.123681558 container init 74b9881134df575633eaa7a3fde72aeb93bb014b6d8d240a84c97c6f388f5513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cartwright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:06:40 np0005604943 podman[264880]: 2026-02-02 12:06:40.683431637 +0000 UTC m=+0.131320444 container start 74b9881134df575633eaa7a3fde72aeb93bb014b6d8d240a84c97c6f388f5513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cartwright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 07:06:40 np0005604943 podman[264880]: 2026-02-02 12:06:40.686640033 +0000 UTC m=+0.134528830 container attach 74b9881134df575633eaa7a3fde72aeb93bb014b6d8d240a84c97c6f388f5513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 07:06:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:06:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:06:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:06:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:06:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:06:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:06:41 np0005604943 lvm[264972]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:06:41 np0005604943 lvm[264972]: VG ceph_vg0 finished
Feb  2 07:06:41 np0005604943 lvm[264975]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:06:41 np0005604943 lvm[264975]: VG ceph_vg1 finished
Feb  2 07:06:41 np0005604943 lvm[264977]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:06:41 np0005604943 lvm[264977]: VG ceph_vg2 finished
Feb  2 07:06:41 np0005604943 cool_cartwright[264896]: {}
Feb  2 07:06:41 np0005604943 systemd[1]: libpod-74b9881134df575633eaa7a3fde72aeb93bb014b6d8d240a84c97c6f388f5513.scope: Deactivated successfully.
Feb  2 07:06:41 np0005604943 systemd[1]: libpod-74b9881134df575633eaa7a3fde72aeb93bb014b6d8d240a84c97c6f388f5513.scope: Consumed 1.258s CPU time.
Feb  2 07:06:41 np0005604943 podman[264980]: 2026-02-02 12:06:41.587768595 +0000 UTC m=+0.027763631 container died 74b9881134df575633eaa7a3fde72aeb93bb014b6d8d240a84c97c6f388f5513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb  2 07:06:41 np0005604943 systemd[1]: var-lib-containers-storage-overlay-2dfa66c834e9ac3fe6b275f92832ae28f96665558209b3dc95c48302fb68f355-merged.mount: Deactivated successfully.
Feb  2 07:06:41 np0005604943 podman[264980]: 2026-02-02 12:06:41.631095614 +0000 UTC m=+0.071090630 container remove 74b9881134df575633eaa7a3fde72aeb93bb014b6d8d240a84c97c6f388f5513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cartwright, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 07:06:41 np0005604943 systemd[1]: libpod-conmon-74b9881134df575633eaa7a3fde72aeb93bb014b6d8d240a84c97c6f388f5513.scope: Deactivated successfully.
Feb  2 07:06:41 np0005604943 nova_compute[238883]: 2026-02-02 12:06:41.673 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:06:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:06:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:06:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.182 238887 DEBUG oslo_concurrency.lockutils [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "117c0603-9127-4e21-9fc6-df67391a5b24" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.183 238887 DEBUG oslo_concurrency.lockutils [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.183 238887 DEBUG oslo_concurrency.lockutils [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.184 238887 DEBUG oslo_concurrency.lockutils [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.184 238887 DEBUG oslo_concurrency.lockutils [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.185 238887 INFO nova.compute.manager [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Terminating instance#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.186 238887 DEBUG nova.compute.manager [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:06:42 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:06:42 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:06:42 np0005604943 kernel: tap24bdd88c-5f (unregistering): left promiscuous mode
Feb  2 07:06:42 np0005604943 NetworkManager[49093]: <info>  [1770034002.2345] device (tap24bdd88c-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.241 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:42 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:42Z|00202|binding|INFO|Releasing lport 24bdd88c-5f95-463b-940e-03c2b17e5e19 from this chassis (sb_readonly=0)
Feb  2 07:06:42 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:42Z|00203|binding|INFO|Setting lport 24bdd88c-5f95-463b-940e-03c2b17e5e19 down in Southbound
Feb  2 07:06:42 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:42Z|00204|binding|INFO|Removing iface tap24bdd88c-5f ovn-installed in OVS
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.246 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.250 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:59:79 10.100.0.6'], port_security=['fa:16:3e:a9:59:79 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '117c0603-9127-4e21-9fc6-df67391a5b24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f445a686-10d3-4653-b101-b0c161d236b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=24bdd88c-5f95-463b-940e-03c2b17e5e19) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.251 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 24bdd88c-5f95-463b-940e-03c2b17e5e19 in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 unbound from our chassis#033[00m
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.253 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34290362-cccd-452d-8e7e-22a6057fdb60, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.254 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[89aa10bc-9df7-486a-98ce-695e209016da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.255 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 namespace which is not needed anymore#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.258 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:42 np0005604943 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Deactivated successfully.
Feb  2 07:06:42 np0005604943 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Consumed 13.903s CPU time.
Feb  2 07:06:42 np0005604943 systemd-machined[206973]: Machine qemu-20-instance-00000014 terminated.
Feb  2 07:06:42 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[264298]: [NOTICE]   (264302) : haproxy version is 2.8.14-c23fe91
Feb  2 07:06:42 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[264298]: [NOTICE]   (264302) : path to executable is /usr/sbin/haproxy
Feb  2 07:06:42 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[264298]: [WARNING]  (264302) : Exiting Master process...
Feb  2 07:06:42 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[264298]: [WARNING]  (264302) : Exiting Master process...
Feb  2 07:06:42 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[264298]: [ALERT]    (264302) : Current worker (264304) exited with code 143 (Terminated)
Feb  2 07:06:42 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[264298]: [WARNING]  (264302) : All workers exited. Exiting... (0)
Feb  2 07:06:42 np0005604943 systemd[1]: libpod-aab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd.scope: Deactivated successfully.
Feb  2 07:06:42 np0005604943 podman[265042]: 2026-02-02 12:06:42.389432962 +0000 UTC m=+0.047568865 container died aab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:06:42 np0005604943 NetworkManager[49093]: <info>  [1770034002.4092] manager: (tap24bdd88c-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Feb  2 07:06:42 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd-userdata-shm.mount: Deactivated successfully.
Feb  2 07:06:42 np0005604943 systemd[1]: var-lib-containers-storage-overlay-9e633287686008b206bacce5782b87eeaeee5089955e291482128e84e45282f6-merged.mount: Deactivated successfully.
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.427 238887 INFO nova.virt.libvirt.driver [-] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Instance destroyed successfully.#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.428 238887 DEBUG nova.objects.instance [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'resources' on Instance uuid 117c0603-9127-4e21-9fc6-df67391a5b24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:06:42 np0005604943 podman[265042]: 2026-02-02 12:06:42.435016222 +0000 UTC m=+0.093152095 container cleanup aab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb  2 07:06:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 168 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 84 op/s
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.445 238887 DEBUG nova.virt.libvirt.vif [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:06:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1199863715',display_name='tempest-TestVolumeBootPattern-server-1199863715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1199863715',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO8pS3TOdyjX/N+jIFJqRkOzhDpnnQvMuyVIbWIYhdDa58/4gu4+MtK78TaoPi0KBaxHL0lWzg2GYnnuAmOLK3vOMGsshwGNfMmLTGNRIjuKqnaNrr1v/EHYLJ6m8LkFkQ==',key_name='tempest-TestVolumeBootPattern-656783760',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:06:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-jvabnf70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:06:22Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=117c0603-9127-4e21-9fc6-df67391a5b24,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "address": "fa:16:3e:a9:59:79", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bdd88c-5f", "ovs_interfaceid": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.445 238887 DEBUG nova.network.os_vif_util [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "address": "fa:16:3e:a9:59:79", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24bdd88c-5f", "ovs_interfaceid": "24bdd88c-5f95-463b-940e-03c2b17e5e19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.446 238887 DEBUG nova.network.os_vif_util [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:59:79,bridge_name='br-int',has_traffic_filtering=True,id=24bdd88c-5f95-463b-940e-03c2b17e5e19,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bdd88c-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.446 238887 DEBUG os_vif [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:59:79,bridge_name='br-int',has_traffic_filtering=True,id=24bdd88c-5f95-463b-940e-03c2b17e5e19,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bdd88c-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:06:42 np0005604943 systemd[1]: libpod-conmon-aab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd.scope: Deactivated successfully.
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.449 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.449 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24bdd88c-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.451 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.452 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.454 238887 INFO os_vif [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:59:79,bridge_name='br-int',has_traffic_filtering=True,id=24bdd88c-5f95-463b-940e-03c2b17e5e19,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24bdd88c-5f')#033[00m
Feb  2 07:06:42 np0005604943 podman[265081]: 2026-02-02 12:06:42.502111732 +0000 UTC m=+0.043144305 container remove aab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.507 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[809f4989-0307-4549-aff9-4e3ac153a5f1]: (4, ('Mon Feb  2 12:06:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 (aab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd)\naab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd\nMon Feb  2 12:06:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 (aab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd)\naab24abd82d13437d80188a6aa31e7aa0e7a3f0b90e1f36f934bdbd778a761cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.510 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[37841f92-4993-4ee7-8adc-e415ff0e9a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.511 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.513 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:42 np0005604943 kernel: tap34290362-c0: left promiscuous mode
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.518 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.522 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[e84f3f16-7561-48a8-8191-8ae1da2ec91f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.539 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[30e7711a-2d54-4fdc-abc5-73d23076260d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.541 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9f92f5b3-2a36-41b5-9e52-3c4ff8afb6f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.558 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[8a88fc1c-3596-4867-a1d5-4eedb66c3b75]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442678, 'reachable_time': 30071, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265114, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:42 np0005604943 systemd[1]: run-netns-ovnmeta\x2d34290362\x2dcccd\x2d452d\x2d8e7e\x2d22a6057fdb60.mount: Deactivated successfully.
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.563 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:06:42 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:42.564 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0c17eb-8e32-4e7f-aa1b-3f4218ac4d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.599 238887 INFO nova.virt.libvirt.driver [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Deleting instance files /var/lib/nova/instances/117c0603-9127-4e21-9fc6-df67391a5b24_del#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.601 238887 INFO nova.virt.libvirt.driver [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Deletion of /var/lib/nova/instances/117c0603-9127-4e21-9fc6-df67391a5b24_del complete#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.666 238887 INFO nova.compute.manager [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Took 0.48 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.667 238887 DEBUG oslo.service.loopingcall [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.667 238887 DEBUG nova.compute.manager [-] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:06:42 np0005604943 nova_compute[238883]: 2026-02-02 12:06:42.667 238887 DEBUG nova.network.neutron [-] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:06:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.081 238887 DEBUG nova.compute.manager [req-1875f5fa-42dd-447c-a8db-29206d737617 req-d9ef3701-f0d2-4b26-a29e-c04462610f44 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Received event network-vif-unplugged-24bdd88c-5f95-463b-940e-03c2b17e5e19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.082 238887 DEBUG oslo_concurrency.lockutils [req-1875f5fa-42dd-447c-a8db-29206d737617 req-d9ef3701-f0d2-4b26-a29e-c04462610f44 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.082 238887 DEBUG oslo_concurrency.lockutils [req-1875f5fa-42dd-447c-a8db-29206d737617 req-d9ef3701-f0d2-4b26-a29e-c04462610f44 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.083 238887 DEBUG oslo_concurrency.lockutils [req-1875f5fa-42dd-447c-a8db-29206d737617 req-d9ef3701-f0d2-4b26-a29e-c04462610f44 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.083 238887 DEBUG nova.compute.manager [req-1875f5fa-42dd-447c-a8db-29206d737617 req-d9ef3701-f0d2-4b26-a29e-c04462610f44 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] No waiting events found dispatching network-vif-unplugged-24bdd88c-5f95-463b-940e-03c2b17e5e19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.083 238887 DEBUG nova.compute.manager [req-1875f5fa-42dd-447c-a8db-29206d737617 req-d9ef3701-f0d2-4b26-a29e-c04462610f44 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Received event network-vif-unplugged-24bdd88c-5f95-463b-940e-03c2b17e5e19 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.266 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.266 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.546 238887 DEBUG nova.network.neutron [-] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.568 238887 INFO nova.compute.manager [-] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Took 0.90 seconds to deallocate network for instance.#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.647 238887 DEBUG nova.compute.manager [req-2c4b14b8-4e40-4f5e-8211-edb4b654ebd2 req-ccf15e4b-93c3-4dba-af84-6c6540298f4a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Received event network-vif-deleted-24bdd88c-5f95-463b-940e-03c2b17e5e19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.812 238887 INFO nova.compute.manager [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Took 0.24 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.867 238887 DEBUG oslo_concurrency.lockutils [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.868 238887 DEBUG oslo_concurrency.lockutils [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:43 np0005604943 nova_compute[238883]: 2026-02-02 12:06:43.905 238887 DEBUG oslo_concurrency.processutils [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:06:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137915793' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:06:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 179 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 1023 KiB/s rd, 3.2 MiB/s wr, 89 op/s
Feb  2 07:06:44 np0005604943 nova_compute[238883]: 2026-02-02 12:06:44.436 238887 DEBUG oslo_concurrency.processutils [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:44 np0005604943 nova_compute[238883]: 2026-02-02 12:06:44.442 238887 DEBUG nova.compute.provider_tree [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:06:44 np0005604943 nova_compute[238883]: 2026-02-02 12:06:44.461 238887 DEBUG nova.scheduler.client.report [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:06:44 np0005604943 nova_compute[238883]: 2026-02-02 12:06:44.489 238887 DEBUG oslo_concurrency.lockutils [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:44 np0005604943 nova_compute[238883]: 2026-02-02 12:06:44.526 238887 INFO nova.scheduler.client.report [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Deleted allocations for instance 117c0603-9127-4e21-9fc6-df67391a5b24#033[00m
Feb  2 07:06:44 np0005604943 nova_compute[238883]: 2026-02-02 12:06:44.598 238887 DEBUG oslo_concurrency.lockutils [None req-06fa503e-ef5e-4dcd-be84-8bf26f074e01 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:44 np0005604943 nova_compute[238883]: 2026-02-02 12:06:44.635 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:06:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:06:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3806055739' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:06:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:06:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3806055739' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:06:45 np0005604943 podman[265139]: 2026-02-02 12:06:45.038976151 +0000 UTC m=+0.052775325 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 07:06:45 np0005604943 podman[265138]: 2026-02-02 12:06:45.069945426 +0000 UTC m=+0.083932845 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb  2 07:06:45 np0005604943 nova_compute[238883]: 2026-02-02 12:06:45.181 238887 DEBUG nova.compute.manager [req-405b349b-a864-4b49-b78b-18192cccd20e req-1c82ba8b-bcbf-45b6-97bc-b0af2c321255 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Received event network-vif-plugged-24bdd88c-5f95-463b-940e-03c2b17e5e19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:45 np0005604943 nova_compute[238883]: 2026-02-02 12:06:45.182 238887 DEBUG oslo_concurrency.lockutils [req-405b349b-a864-4b49-b78b-18192cccd20e req-1c82ba8b-bcbf-45b6-97bc-b0af2c321255 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:45 np0005604943 nova_compute[238883]: 2026-02-02 12:06:45.182 238887 DEBUG oslo_concurrency.lockutils [req-405b349b-a864-4b49-b78b-18192cccd20e req-1c82ba8b-bcbf-45b6-97bc-b0af2c321255 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:45 np0005604943 nova_compute[238883]: 2026-02-02 12:06:45.182 238887 DEBUG oslo_concurrency.lockutils [req-405b349b-a864-4b49-b78b-18192cccd20e req-1c82ba8b-bcbf-45b6-97bc-b0af2c321255 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "117c0603-9127-4e21-9fc6-df67391a5b24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:45 np0005604943 nova_compute[238883]: 2026-02-02 12:06:45.182 238887 DEBUG nova.compute.manager [req-405b349b-a864-4b49-b78b-18192cccd20e req-1c82ba8b-bcbf-45b6-97bc-b0af2c321255 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] No waiting events found dispatching network-vif-plugged-24bdd88c-5f95-463b-940e-03c2b17e5e19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:06:45 np0005604943 nova_compute[238883]: 2026-02-02 12:06:45.182 238887 WARNING nova.compute.manager [req-405b349b-a864-4b49-b78b-18192cccd20e req-1c82ba8b-bcbf-45b6-97bc-b0af2c321255 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Received unexpected event network-vif-plugged-24bdd88c-5f95-463b-940e-03c2b17e5e19 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:06:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 179 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 3.2 MiB/s wr, 86 op/s
Feb  2 07:06:46 np0005604943 nova_compute[238883]: 2026-02-02 12:06:46.677 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:47 np0005604943 nova_compute[238883]: 2026-02-02 12:06:47.452 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:47 np0005604943 nova_compute[238883]: 2026-02-02 12:06:47.683 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:47 np0005604943 nova_compute[238883]: 2026-02-02 12:06:47.683 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:47 np0005604943 nova_compute[238883]: 2026-02-02 12:06:47.700 238887 DEBUG nova.compute.manager [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:06:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:06:47 np0005604943 nova_compute[238883]: 2026-02-02 12:06:47.776 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:47 np0005604943 nova_compute[238883]: 2026-02-02 12:06:47.777 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:47 np0005604943 nova_compute[238883]: 2026-02-02 12:06:47.783 238887 DEBUG nova.virt.hardware [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:06:47 np0005604943 nova_compute[238883]: 2026-02-02 12:06:47.783 238887 INFO nova.compute.claims [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:06:47 np0005604943 nova_compute[238883]: 2026-02-02 12:06:47.888 238887 DEBUG oslo_concurrency.processutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.229 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "140c7b65-c11d-4032-aaf8-db6b3df5127e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.230 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.271 238887 DEBUG nova.compute.manager [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.350 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:06:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3018556076' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.432 238887 DEBUG oslo_concurrency.processutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 253 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 361 KiB/s rd, 9.2 MiB/s wr, 116 op/s
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.437 238887 DEBUG nova.compute.provider_tree [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.456 238887 DEBUG nova.scheduler.client.report [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.486 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.487 238887 DEBUG nova.compute.manager [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.489 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.495 238887 DEBUG nova.virt.hardware [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.495 238887 INFO nova.compute.claims [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.561 238887 DEBUG nova.compute.manager [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.561 238887 DEBUG nova.network.neutron [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.588 238887 INFO nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.611 238887 DEBUG nova.compute.manager [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.661 238887 DEBUG oslo_concurrency.processutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.685 238887 INFO nova.virt.block_device [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Booting with volume 5b4325ec-9602-4b79-9255-eb8f8017eaca at /dev/vda#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.800 238887 DEBUG nova.policy [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cd5824e18d5e443cb24d3bf55ff2c553', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c7b49c49c104c079544033b07fb2f3d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.827 238887 DEBUG os_brick.utils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.828 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.843 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.844 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf79b78-3794-4e90-8bab-8af0476279ad]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.846 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.853 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.854 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[20516668-deef-4cd0-ad70-7f92db30bdc2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.855 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.864 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.864 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[5224cfc9-0b70-4ff0-90c4-487956019783]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.866 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[5e97acae-852d-4503-ac18-84fffa9abdba]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.866 238887 DEBUG oslo_concurrency.processutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.883 238887 DEBUG oslo_concurrency.processutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.885 238887 DEBUG os_brick.initiator.connectors.lightos [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.885 238887 DEBUG os_brick.initiator.connectors.lightos [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.885 238887 DEBUG os_brick.initiator.connectors.lightos [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.886 238887 DEBUG os_brick.utils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:06:48 np0005604943 nova_compute[238883]: 2026-02-02 12:06:48.886 238887 DEBUG nova.virt.block_device [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Updating existing volume attachment record: 68867f49-a5d9-4d26-9a15-cde64d181a6a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:06:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:06:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1307914503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.261 238887 DEBUG oslo_concurrency.processutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.266 238887 DEBUG nova.compute.provider_tree [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.285 238887 DEBUG nova.scheduler.client.report [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.324 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.325 238887 DEBUG nova.compute.manager [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.391 238887 DEBUG nova.compute.manager [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.392 238887 DEBUG nova.network.neutron [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.507 238887 INFO nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.580 238887 DEBUG nova.compute.manager [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.623 238887 INFO nova.virt.block_device [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Booting with volume 35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c at /dev/vda#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.662 238887 DEBUG nova.policy [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e3fc9d8415541ecaa0da4968c9fa242', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e66ed51ccbb840f083b8a86476696747', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.799 238887 DEBUG os_brick.utils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.801 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.809 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.809 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[a6696f85-0613-4014-8a4f-30078f27efc9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.811 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.819 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.819 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[6224efde-5717-4d57-a100-80b50198cad0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.821 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.833 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.833 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[810f01c0-f2e9-464f-9f2d-a5bd39e5802e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.835 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[fb582ea7-b1d4-4181-ae05-2947e8186ee3]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.835 238887 DEBUG oslo_concurrency.processutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.858 238887 DEBUG oslo_concurrency.processutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.860 238887 DEBUG os_brick.initiator.connectors.lightos [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.861 238887 DEBUG os_brick.initiator.connectors.lightos [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.861 238887 DEBUG os_brick.initiator.connectors.lightos [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.861 238887 DEBUG os_brick.utils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:06:49 np0005604943 nova_compute[238883]: 2026-02-02 12:06:49.861 238887 DEBUG nova.virt.block_device [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Updating existing volume attachment record: 0d6f6623-485f-4a42-aee4-79a388427755 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:06:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:06:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2853628355' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.207 238887 DEBUG nova.network.neutron [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Successfully created port: e40b0257-80f7-4e6f-ab5e-058f6961b2fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.426 238887 DEBUG nova.compute.manager [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.429 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.429 238887 INFO nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Creating image(s)#033[00m
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.430 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.430 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Ensure instance console log exists: /var/lib/nova/instances/d0404e7d-4162-4ea0-86e0-e7869e7fb702/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.431 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.432 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.432 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 281 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 10 MiB/s wr, 74 op/s
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.477 238887 DEBUG nova.network.neutron [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Successfully created port: 0afadb99-91e4-4b90-8cad-6f4e97daf0f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:06:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:06:50 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3124977467' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.902 238887 DEBUG nova.network.neutron [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Successfully updated port: e40b0257-80f7-4e6f-ab5e-058f6961b2fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.929 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "refresh_cache-d0404e7d-4162-4ea0-86e0-e7869e7fb702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.929 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquired lock "refresh_cache-d0404e7d-4162-4ea0-86e0-e7869e7fb702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:06:50 np0005604943 nova_compute[238883]: 2026-02-02 12:06:50.929 238887 DEBUG nova.network.neutron [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.045 238887 DEBUG nova.compute.manager [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.046 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.046 238887 INFO nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Creating image(s)#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.047 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.048 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Ensure instance console log exists: /var/lib/nova/instances/140c7b65-c11d-4032-aaf8-db6b3df5127e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.048 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.049 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.049 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.081 238887 DEBUG nova.network.neutron [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.330 238887 DEBUG nova.compute.manager [req-7a4ec6e5-61ce-44c6-b9d8-c11de0c91e23 req-34ba73c3-8738-4284-9563-31187c408aef 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Received event network-changed-e40b0257-80f7-4e6f-ab5e-058f6961b2fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.330 238887 DEBUG nova.compute.manager [req-7a4ec6e5-61ce-44c6-b9d8-c11de0c91e23 req-34ba73c3-8738-4284-9563-31187c408aef 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Refreshing instance network info cache due to event network-changed-e40b0257-80f7-4e6f-ab5e-058f6961b2fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.331 238887 DEBUG oslo_concurrency.lockutils [req-7a4ec6e5-61ce-44c6-b9d8-c11de0c91e23 req-34ba73c3-8738-4284-9563-31187c408aef 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-d0404e7d-4162-4ea0-86e0-e7869e7fb702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.386 238887 DEBUG nova.network.neutron [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Successfully updated port: 0afadb99-91e4-4b90-8cad-6f4e97daf0f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.401 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.401 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquired lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.401 238887 DEBUG nova.network.neutron [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.458 238887 DEBUG nova.compute.manager [req-697ac2b5-8873-4392-aa26-0f67a238274a req-90f16779-d9d6-46ce-ac2b-b0c3e51da6b2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Received event network-changed-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.458 238887 DEBUG nova.compute.manager [req-697ac2b5-8873-4392-aa26-0f67a238274a req-90f16779-d9d6-46ce-ac2b-b0c3e51da6b2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Refreshing instance network info cache due to event network-changed-0afadb99-91e4-4b90-8cad-6f4e97daf0f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.458 238887 DEBUG oslo_concurrency.lockutils [req-697ac2b5-8873-4392-aa26-0f67a238274a req-90f16779-d9d6-46ce-ac2b-b0c3e51da6b2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.556 238887 DEBUG nova.network.neutron [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.676 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.709 238887 DEBUG nova.network.neutron [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Updating instance_info_cache with network_info: [{"id": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "address": "fa:16:3e:c9:6f:59", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape40b0257-80", "ovs_interfaceid": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.723 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Releasing lock "refresh_cache-d0404e7d-4162-4ea0-86e0-e7869e7fb702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.723 238887 DEBUG nova.compute.manager [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Instance network_info: |[{"id": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "address": "fa:16:3e:c9:6f:59", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape40b0257-80", "ovs_interfaceid": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.723 238887 DEBUG oslo_concurrency.lockutils [req-7a4ec6e5-61ce-44c6-b9d8-c11de0c91e23 req-34ba73c3-8738-4284-9563-31187c408aef 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-d0404e7d-4162-4ea0-86e0-e7869e7fb702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.723 238887 DEBUG nova.network.neutron [req-7a4ec6e5-61ce-44c6-b9d8-c11de0c91e23 req-34ba73c3-8738-4284-9563-31187c408aef 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Refreshing network info cache for port e40b0257-80f7-4e6f-ab5e-058f6961b2fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.726 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Start _get_guest_xml network_info=[{"id": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "address": "fa:16:3e:c9:6f:59", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape40b0257-80", "ovs_interfaceid": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': '68867f49-a5d9-4d26-9a15-cde64d181a6a', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5b4325ec-9602-4b79-9255-eb8f8017eaca', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5b4325ec-9602-4b79-9255-eb8f8017eaca', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'd0404e7d-4162-4ea0-86e0-e7869e7fb702', 'attached_at': '', 'detached_at': '', 'volume_id': '5b4325ec-9602-4b79-9255-eb8f8017eaca', 'serial': '5b4325ec-9602-4b79-9255-eb8f8017eaca'}, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.730 238887 WARNING nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.734 238887 DEBUG nova.virt.libvirt.host [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.735 238887 DEBUG nova.virt.libvirt.host [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.739 238887 DEBUG nova.virt.libvirt.host [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.739 238887 DEBUG nova.virt.libvirt.host [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.740 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.740 238887 DEBUG nova.virt.hardware [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.740 238887 DEBUG nova.virt.hardware [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.740 238887 DEBUG nova.virt.hardware [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.740 238887 DEBUG nova.virt.hardware [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.741 238887 DEBUG nova.virt.hardware [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.741 238887 DEBUG nova.virt.hardware [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.741 238887 DEBUG nova.virt.hardware [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.741 238887 DEBUG nova.virt.hardware [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.741 238887 DEBUG nova.virt.hardware [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.741 238887 DEBUG nova.virt.hardware [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.742 238887 DEBUG nova.virt.hardware [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.763 238887 DEBUG nova.storage.rbd_utils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] rbd image d0404e7d-4162-4ea0-86e0-e7869e7fb702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:06:51 np0005604943 nova_compute[238883]: 2026-02-02 12:06:51.767 238887 DEBUG oslo_concurrency.processutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.202 238887 DEBUG nova.network.neutron [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Updating instance_info_cache with network_info: [{"id": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "address": "fa:16:3e:18:be:eb", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afadb99-91", "ovs_interfaceid": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:06:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:06:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2818448972' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.280 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Releasing lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.281 238887 DEBUG nova.compute.manager [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Instance network_info: |[{"id": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "address": "fa:16:3e:18:be:eb", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afadb99-91", "ovs_interfaceid": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.281 238887 DEBUG oslo_concurrency.lockutils [req-697ac2b5-8873-4392-aa26-0f67a238274a req-90f16779-d9d6-46ce-ac2b-b0c3e51da6b2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.282 238887 DEBUG nova.network.neutron [req-697ac2b5-8873-4392-aa26-0f67a238274a req-90f16779-d9d6-46ce-ac2b-b0c3e51da6b2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Refreshing network info cache for port 0afadb99-91e4-4b90-8cad-6f4e97daf0f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.284 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Start _get_guest_xml network_info=[{"id": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "address": "fa:16:3e:18:be:eb", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afadb99-91", "ovs_interfaceid": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': '0d6f6623-485f-4a42-aee4-79a388427755', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '140c7b65-c11d-4032-aaf8-db6b3df5127e', 'attached_at': '', 'detached_at': '', 'volume_id': '35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c', 'serial': '35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.289 238887 WARNING nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.296 238887 DEBUG oslo_concurrency.processutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.309 238887 DEBUG nova.virt.libvirt.host [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.309 238887 DEBUG nova.virt.libvirt.host [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.322 238887 DEBUG nova.virt.libvirt.host [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.323 238887 DEBUG nova.virt.libvirt.host [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.323 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.323 238887 DEBUG nova.virt.hardware [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.324 238887 DEBUG nova.virt.hardware [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.324 238887 DEBUG nova.virt.hardware [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.324 238887 DEBUG nova.virt.hardware [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.324 238887 DEBUG nova.virt.hardware [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.324 238887 DEBUG nova.virt.hardware [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.325 238887 DEBUG nova.virt.hardware [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.325 238887 DEBUG nova.virt.hardware [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.325 238887 DEBUG nova.virt.hardware [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.325 238887 DEBUG nova.virt.hardware [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.325 238887 DEBUG nova.virt.hardware [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.350 238887 DEBUG nova.storage.rbd_utils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 140c7b65-c11d-4032-aaf8-db6b3df5127e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.355 238887 DEBUG oslo_concurrency.processutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 351 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 15 MiB/s wr, 96 op/s
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.440 238887 DEBUG os_brick.encryptors [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Using volume encryption metadata '{'encryption_key_id': '1fe221d3-01b6-4799-b3ad-3f2da060cf1c', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5b4325ec-9602-4b79-9255-eb8f8017eaca', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5b4325ec-9602-4b79-9255-eb8f8017eaca', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'd0404e7d-4162-4ea0-86e0-e7869e7fb702', 'attached_at': '', 'detached_at': '', 'volume_id': '5b4325ec-9602-4b79-9255-eb8f8017eaca', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.445 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.466 238887 DEBUG barbicanclient.v1.secrets [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.467 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.500 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.501 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.505 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.520 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.521 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.545 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.546 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.568 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.569 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.593 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.594 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.617 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.617 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.641 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.641 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.660 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.661 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.682 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.682 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.701 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.701 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.723 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.723 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.740 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.741 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.761 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.761 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.781 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.782 238887 INFO barbicanclient.base [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/1fe221d3-01b6-4799-b3ad-3f2da060cf1c#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.806 238887 DEBUG barbicanclient.client [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.806 238887 DEBUG nova.virt.libvirt.host [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  <usage type="volume">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <volume>5b4325ec-9602-4b79-9255-eb8f8017eaca</volume>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  </usage>
Feb  2 07:06:52 np0005604943 nova_compute[238883]: </secret>
Feb  2 07:06:52 np0005604943 nova_compute[238883]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.836 238887 DEBUG nova.virt.libvirt.vif [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:06:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1726050391',display_name='tempest-TransferEncryptedVolumeTest-server-1726050391',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1726050391',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDy1jBskL6RDlru0VUyMYEuQYZdWj4mgPqYNbp/ZxOi/SP0295JAyJLHX3JiQjzCwuF8BsyBv7iV3J6nvrpEE+i/AXa4yixOsMe088OGvWt8cZiFnV/xX7EKx5mK84nug==',key_name='tempest-TransferEncryptedVolumeTest-704936637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c7b49c49c104c079544033b07fb2f3d',ramdisk_id='',reservation_id='r-zed5n8a7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-347797880',owner_user_name='tempest-TransferEncryptedVolumeTest-347797880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:06:48Z,user_data=None,user_id='cd5824e18d5e443cb24d3bf55ff2c553',uuid=d0404e7d-4162-4ea0-86e0-e7869e7fb702,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "address": "fa:16:3e:c9:6f:59", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape40b0257-80", "ovs_interfaceid": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.836 238887 DEBUG nova.network.os_vif_util [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converting VIF {"id": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "address": "fa:16:3e:c9:6f:59", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape40b0257-80", "ovs_interfaceid": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.837 238887 DEBUG nova.network.os_vif_util [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:6f:59,bridge_name='br-int',has_traffic_filtering=True,id=e40b0257-80f7-4e6f-ab5e-058f6961b2fa,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape40b0257-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.839 238887 DEBUG nova.objects.instance [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lazy-loading 'pci_devices' on Instance uuid d0404e7d-4162-4ea0-86e0-e7869e7fb702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.852 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  <uuid>d0404e7d-4162-4ea0-86e0-e7869e7fb702</uuid>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  <name>instance-00000015</name>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1726050391</nova:name>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:06:51</nova:creationTime>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        <nova:user uuid="cd5824e18d5e443cb24d3bf55ff2c553">tempest-TransferEncryptedVolumeTest-347797880-project-member</nova:user>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        <nova:project uuid="4c7b49c49c104c079544033b07fb2f3d">tempest-TransferEncryptedVolumeTest-347797880</nova:project>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        <nova:port uuid="e40b0257-80f7-4e6f-ab5e-058f6961b2fa">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <entry name="serial">d0404e7d-4162-4ea0-86e0-e7869e7fb702</entry>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <entry name="uuid">d0404e7d-4162-4ea0-86e0-e7869e7fb702</entry>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/d0404e7d-4162-4ea0-86e0-e7869e7fb702_disk.config">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-5b4325ec-9602-4b79-9255-eb8f8017eaca">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <serial>5b4325ec-9602-4b79-9255-eb8f8017eaca</serial>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <encryption format="luks">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:        <secret type="passphrase" uuid="b3836512-e672-48e7-ad66-cfeac351ef22"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      </encryption>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:c9:6f:59"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <target dev="tape40b0257-80"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/d0404e7d-4162-4ea0-86e0-e7869e7fb702/console.log" append="off"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:06:52 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:06:52 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:06:52 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:06:52 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.853 238887 DEBUG nova.compute.manager [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Preparing to wait for external event network-vif-plugged-e40b0257-80f7-4e6f-ab5e-058f6961b2fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.853 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.853 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.853 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.854 238887 DEBUG nova.virt.libvirt.vif [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:06:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1726050391',display_name='tempest-TransferEncryptedVolumeTest-server-1726050391',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1726050391',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDy1jBskL6RDlru0VUyMYEuQYZdWj4mgPqYNbp/ZxOi/SP0295JAyJLHX3JiQjzCwuF8BsyBv7iV3J6nvrpEE+i/AXa4yixOsMe088OGvWt8cZiFnV/xX7EKx5mK84nug==',key_name='tempest-TransferEncryptedVolumeTest-704936637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c7b49c49c104c079544033b07fb2f3d',ramdisk_id='',reservation_id='r-zed5n8a7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-347797880',owner_user_name='tempest-TransferEncryptedVolumeTest-347797880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:06:48Z,user_data=None,user_id='cd5824e18d5e443cb24d3bf55ff2c553',uuid=d0404e7d-4162-4ea0-86e0-e7869e7fb702,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "address": "fa:16:3e:c9:6f:59", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape40b0257-80", "ovs_interfaceid": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.854 238887 DEBUG nova.network.os_vif_util [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converting VIF {"id": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "address": "fa:16:3e:c9:6f:59", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape40b0257-80", "ovs_interfaceid": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.855 238887 DEBUG nova.network.os_vif_util [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:6f:59,bridge_name='br-int',has_traffic_filtering=True,id=e40b0257-80f7-4e6f-ab5e-058f6961b2fa,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape40b0257-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.855 238887 DEBUG os_vif [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:6f:59,bridge_name='br-int',has_traffic_filtering=True,id=e40b0257-80f7-4e6f-ab5e-058f6961b2fa,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape40b0257-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.856 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.856 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.856 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.860 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.861 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape40b0257-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.861 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape40b0257-80, col_values=(('external_ids', {'iface-id': 'e40b0257-80f7-4e6f-ab5e-058f6961b2fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:6f:59', 'vm-uuid': 'd0404e7d-4162-4ea0-86e0-e7869e7fb702'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.862 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:52 np0005604943 NetworkManager[49093]: <info>  [1770034012.8638] manager: (tape40b0257-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.865 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.867 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.868 238887 INFO os_vif [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:6f:59,bridge_name='br-int',has_traffic_filtering=True,id=e40b0257-80f7-4e6f-ab5e-058f6961b2fa,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape40b0257-80')#033[00m
Feb  2 07:06:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:06:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2331315076' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.917 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.918 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.918 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] No VIF found with MAC fa:16:3e:c9:6f:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.918 238887 INFO nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Using config drive#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.938 238887 DEBUG nova.storage.rbd_utils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] rbd image d0404e7d-4162-4ea0-86e0-e7869e7fb702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.943 238887 DEBUG oslo_concurrency.processutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.991 238887 DEBUG nova.virt.libvirt.vif [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:06:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1233051117',display_name='tempest-TestVolumeBootPattern-server-1233051117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1233051117',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO8pS3TOdyjX/N+jIFJqRkOzhDpnnQvMuyVIbWIYhdDa58/4gu4+MtK78TaoPi0KBaxHL0lWzg2GYnnuAmOLK3vOMGsshwGNfMmLTGNRIjuKqnaNrr1v/EHYLJ6m8LkFkQ==',key_name='tempest-TestVolumeBootPattern-656783760',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-wcv20q0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:06:49Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=140c7b65-c11d-4032-aaf8-db6b3df5127e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "address": "fa:16:3e:18:be:eb", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afadb99-91", "ovs_interfaceid": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.992 238887 DEBUG nova.network.os_vif_util [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "address": "fa:16:3e:18:be:eb", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afadb99-91", "ovs_interfaceid": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.993 238887 DEBUG nova.network.os_vif_util [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:be:eb,bridge_name='br-int',has_traffic_filtering=True,id=0afadb99-91e4-4b90-8cad-6f4e97daf0f9,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0afadb99-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:06:52 np0005604943 nova_compute[238883]: 2026-02-02 12:06:52.994 238887 DEBUG nova.objects.instance [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'pci_devices' on Instance uuid 140c7b65-c11d-4032-aaf8-db6b3df5127e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.013 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  <uuid>140c7b65-c11d-4032-aaf8-db6b3df5127e</uuid>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  <name>instance-00000016</name>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestVolumeBootPattern-server-1233051117</nova:name>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:06:52</nova:creationTime>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:        <nova:user uuid="5e3fc9d8415541ecaa0da4968c9fa242">tempest-TestVolumeBootPattern-1059348902-project-member</nova:user>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:        <nova:project uuid="e66ed51ccbb840f083b8a86476696747">tempest-TestVolumeBootPattern-1059348902</nova:project>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:        <nova:port uuid="0afadb99-91e4-4b90-8cad-6f4e97daf0f9">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <entry name="serial">140c7b65-c11d-4032-aaf8-db6b3df5127e</entry>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <entry name="uuid">140c7b65-c11d-4032-aaf8-db6b3df5127e</entry>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/140c7b65-c11d-4032-aaf8-db6b3df5127e_disk.config">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <serial>35e93bdb-d5a0-4f55-9db9-c7fbfb691c9c</serial>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:18:be:eb"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <target dev="tap0afadb99-91"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/140c7b65-c11d-4032-aaf8-db6b3df5127e/console.log" append="off"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:06:53 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:06:53 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:06:53 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:06:53 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.013 238887 DEBUG nova.compute.manager [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Preparing to wait for external event network-vif-plugged-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.014 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.014 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.014 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.015 238887 DEBUG nova.virt.libvirt.vif [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:06:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1233051117',display_name='tempest-TestVolumeBootPattern-server-1233051117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1233051117',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO8pS3TOdyjX/N+jIFJqRkOzhDpnnQvMuyVIbWIYhdDa58/4gu4+MtK78TaoPi0KBaxHL0lWzg2GYnnuAmOLK3vOMGsshwGNfMmLTGNRIjuKqnaNrr1v/EHYLJ6m8LkFkQ==',key_name='tempest-TestVolumeBootPattern-656783760',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-wcv20q0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:06:49Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=140c7b65-c11d-4032-aaf8-db6b3df5127e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "address": "fa:16:3e:18:be:eb", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afadb99-91", "ovs_interfaceid": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.015 238887 DEBUG nova.network.os_vif_util [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "address": "fa:16:3e:18:be:eb", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afadb99-91", "ovs_interfaceid": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.016 238887 DEBUG nova.network.os_vif_util [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:be:eb,bridge_name='br-int',has_traffic_filtering=True,id=0afadb99-91e4-4b90-8cad-6f4e97daf0f9,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0afadb99-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.016 238887 DEBUG os_vif [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:be:eb,bridge_name='br-int',has_traffic_filtering=True,id=0afadb99-91e4-4b90-8cad-6f4e97daf0f9,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0afadb99-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.017 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.017 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.017 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.020 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.020 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0afadb99-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.020 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0afadb99-91, col_values=(('external_ids', {'iface-id': '0afadb99-91e4-4b90-8cad-6f4e97daf0f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:be:eb', 'vm-uuid': '140c7b65-c11d-4032-aaf8-db6b3df5127e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:53 np0005604943 NetworkManager[49093]: <info>  [1770034013.0234] manager: (tap0afadb99-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.025 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.027 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.028 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.029 238887 INFO os_vif [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:be:eb,bridge_name='br-int',has_traffic_filtering=True,id=0afadb99-91e4-4b90-8cad-6f4e97daf0f9,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0afadb99-91')#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.070 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.070 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.070 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No VIF found with MAC fa:16:3e:18:be:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.071 238887 INFO nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Using config drive#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.092 238887 DEBUG nova.storage.rbd_utils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 140c7b65-c11d-4032-aaf8-db6b3df5127e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.221 238887 DEBUG nova.network.neutron [req-7a4ec6e5-61ce-44c6-b9d8-c11de0c91e23 req-34ba73c3-8738-4284-9563-31187c408aef 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Updated VIF entry in instance network info cache for port e40b0257-80f7-4e6f-ab5e-058f6961b2fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.221 238887 DEBUG nova.network.neutron [req-7a4ec6e5-61ce-44c6-b9d8-c11de0c91e23 req-34ba73c3-8738-4284-9563-31187c408aef 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Updating instance_info_cache with network_info: [{"id": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "address": "fa:16:3e:c9:6f:59", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape40b0257-80", "ovs_interfaceid": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.240 238887 DEBUG oslo_concurrency.lockutils [req-7a4ec6e5-61ce-44c6-b9d8-c11de0c91e23 req-34ba73c3-8738-4284-9563-31187c408aef 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-d0404e7d-4162-4ea0-86e0-e7869e7fb702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.296 238887 INFO nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Creating config drive at /var/lib/nova/instances/d0404e7d-4162-4ea0-86e0-e7869e7fb702/disk.config#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.302 238887 DEBUG oslo_concurrency.processutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d0404e7d-4162-4ea0-86e0-e7869e7fb702/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpnwk_r7hd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.434 238887 DEBUG oslo_concurrency.processutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d0404e7d-4162-4ea0-86e0-e7869e7fb702/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpnwk_r7hd" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.461 238887 DEBUG nova.storage.rbd_utils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] rbd image d0404e7d-4162-4ea0-86e0-e7869e7fb702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.465 238887 DEBUG oslo_concurrency.processutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d0404e7d-4162-4ea0-86e0-e7869e7fb702/disk.config d0404e7d-4162-4ea0-86e0-e7869e7fb702_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.615 238887 DEBUG oslo_concurrency.processutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d0404e7d-4162-4ea0-86e0-e7869e7fb702/disk.config d0404e7d-4162-4ea0-86e0-e7869e7fb702_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.616 238887 INFO nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Deleting local config drive /var/lib/nova/instances/d0404e7d-4162-4ea0-86e0-e7869e7fb702/disk.config because it was imported into RBD.#033[00m
Feb  2 07:06:53 np0005604943 kernel: tape40b0257-80: entered promiscuous mode
Feb  2 07:06:53 np0005604943 NetworkManager[49093]: <info>  [1770034013.6659] manager: (tape40b0257-80): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.668 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:53 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:53Z|00205|binding|INFO|Claiming lport e40b0257-80f7-4e6f-ab5e-058f6961b2fa for this chassis.
Feb  2 07:06:53 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:53Z|00206|binding|INFO|e40b0257-80f7-4e6f-ab5e-058f6961b2fa: Claiming fa:16:3e:c9:6f:59 10.100.0.11
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.678 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:6f:59 10.100.0.11'], port_security=['fa:16:3e:c9:6f:59 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd0404e7d-4162-4ea0-86e0-e7869e7fb702', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efa24ae1-9962-44ca-882a-8d146356fcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c7b49c49c104c079544033b07fb2f3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd3424b4-e169-47dd-816d-ac2340e28ccc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b6e8bcf-741b-41c8-a826-9b6dbb1c260b, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=e40b0257-80f7-4e6f-ab5e-058f6961b2fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.680 155011 INFO neutron.agent.ovn.metadata.agent [-] Port e40b0257-80f7-4e6f-ab5e-058f6961b2fa in datapath efa24ae1-9962-44ca-882a-8d146356fcca bound to our chassis#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.682 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network efa24ae1-9962-44ca-882a-8d146356fcca#033[00m
Feb  2 07:06:53 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:53Z|00207|binding|INFO|Setting lport e40b0257-80f7-4e6f-ab5e-058f6961b2fa ovn-installed in OVS
Feb  2 07:06:53 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:53Z|00208|binding|INFO|Setting lport e40b0257-80f7-4e6f-ab5e-058f6961b2fa up in Southbound
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.699 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.703 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[84c2b84a-6350-4d1b-b544-e759ee1533dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.704 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapefa24ae1-91 in ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.708 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapefa24ae1-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.708 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8723e8-9046-4fdc-8f39-9880cd380aee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.709 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[83a86732-b179-4d11-b4ab-599b29791bb6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 systemd-machined[206973]: New machine qemu-21-instance-00000015.
Feb  2 07:06:53 np0005604943 systemd-udevd[265419]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.718 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[308bdd6c-318d-415a-b359-e48bf5178404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Feb  2 07:06:53 np0005604943 NetworkManager[49093]: <info>  [1770034013.7290] device (tape40b0257-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:06:53 np0005604943 NetworkManager[49093]: <info>  [1770034013.7298] device (tape40b0257-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.731 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b9376b5d-752f-42bf-8169-f3f5ccf19ac9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.760 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[1f87a6e0-1bb0-4995-af99-c836d10a60d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 systemd-udevd[265423]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:06:53 np0005604943 NetworkManager[49093]: <info>  [1770034013.7684] manager: (tapefa24ae1-90): new Veth device (/org/freedesktop/NetworkManager/Devices/111)
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.766 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b03e9ed9-a014-4e76-be0f-982232c52ace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.799 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4a4ca2-0d0b-48f2-921a-e8b9f3741798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.802 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[a23c8fb0-b030-47a2-9ede-d7848db5de3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 NetworkManager[49093]: <info>  [1770034013.8218] device (tapefa24ae1-90): carrier: link connected
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.828 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6b29fe-f866-4481-ab93-7c111a8878f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.845 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b9db75f9-c15b-49d6-a176-5dc445e4f746]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefa24ae1-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:4e:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445917, 'reachable_time': 30666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265454, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.862 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb98299-3f6c-4a93-bfd4-51265327af1e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:4ebf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445917, 'tstamp': 445917}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265455, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.878 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7edb866f-12e0-41ec-a002-8706792e9dae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefa24ae1-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:4e:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445917, 'reachable_time': 30666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265456, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.893 238887 INFO nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Creating config drive at /var/lib/nova/instances/140c7b65-c11d-4032-aaf8-db6b3df5127e/disk.config#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.897 238887 DEBUG oslo_concurrency.processutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/140c7b65-c11d-4032-aaf8-db6b3df5127e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpx7v5jwgz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.919 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ce58754f-7e8b-4670-b1df-81eb67ac656a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.966 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3595ec75-e13b-4c85-977a-1b5e64dfe290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.969 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefa24ae1-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.969 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.970 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefa24ae1-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:53 np0005604943 NetworkManager[49093]: <info>  [1770034013.9730] manager: (tapefa24ae1-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.973 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:53 np0005604943 kernel: tapefa24ae1-90: entered promiscuous mode
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.977 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.978 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapefa24ae1-90, col_values=(('external_ids', {'iface-id': '88fa0d04-0a79-4556-b2c6-d65a3a18ab58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.979 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:53 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:53Z|00209|binding|INFO|Releasing lport 88fa0d04-0a79-4556-b2c6-d65a3a18ab58 from this chassis (sb_readonly=0)
Feb  2 07:06:53 np0005604943 nova_compute[238883]: 2026-02-02 12:06:53.986 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.987 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/efa24ae1-9962-44ca-882a-8d146356fcca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/efa24ae1-9962-44ca-882a-8d146356fcca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.988 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdeac19-4c2d-4a7f-ac33-b52d58984e51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.989 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-efa24ae1-9962-44ca-882a-8d146356fcca
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/efa24ae1-9962-44ca-882a-8d146356fcca.pid.haproxy
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID efa24ae1-9962-44ca-882a-8d146356fcca
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 07:06:53 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:53.991 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'env', 'PROCESS_TAG=haproxy-efa24ae1-9962-44ca-882a-8d146356fcca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/efa24ae1-9962-44ca-882a-8d146356fcca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.024 238887 DEBUG oslo_concurrency.processutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/140c7b65-c11d-4032-aaf8-db6b3df5127e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpx7v5jwgz" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.053 238887 DEBUG nova.storage.rbd_utils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image 140c7b65-c11d-4032-aaf8-db6b3df5127e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.058 238887 DEBUG oslo_concurrency.processutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/140c7b65-c11d-4032-aaf8-db6b3df5127e/disk.config 140c7b65-c11d-4032-aaf8-db6b3df5127e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.203 238887 DEBUG oslo_concurrency.processutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/140c7b65-c11d-4032-aaf8-db6b3df5127e/disk.config 140c7b65-c11d-4032-aaf8-db6b3df5127e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.204 238887 INFO nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Deleting local config drive /var/lib/nova/instances/140c7b65-c11d-4032-aaf8-db6b3df5127e/disk.config because it was imported into RBD.#033[00m
Feb  2 07:06:54 np0005604943 kernel: tap0afadb99-91: entered promiscuous mode
Feb  2 07:06:54 np0005604943 NetworkManager[49093]: <info>  [1770034014.2498] manager: (tap0afadb99-91): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Feb  2 07:06:54 np0005604943 systemd-udevd[265448]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:06:54 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:54Z|00210|binding|INFO|Claiming lport 0afadb99-91e4-4b90-8cad-6f4e97daf0f9 for this chassis.
Feb  2 07:06:54 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:54Z|00211|binding|INFO|0afadb99-91e4-4b90-8cad-6f4e97daf0f9: Claiming fa:16:3e:18:be:eb 10.100.0.9
Feb  2 07:06:54 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:54Z|00212|binding|INFO|Setting lport 0afadb99-91e4-4b90-8cad-6f4e97daf0f9 ovn-installed in OVS
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.258 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:54 np0005604943 NetworkManager[49093]: <info>  [1770034014.2630] device (tap0afadb99-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:06:54 np0005604943 NetworkManager[49093]: <info>  [1770034014.2643] device (tap0afadb99-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:06:54 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:54Z|00213|binding|INFO|Setting lport 0afadb99-91e4-4b90-8cad-6f4e97daf0f9 up in Southbound
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.268 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:be:eb 10.100.0.9'], port_security=['fa:16:3e:18:be:eb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '140c7b65-c11d-4032-aaf8-db6b3df5127e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f445a686-10d3-4653-b101-b0c161d236b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=0afadb99-91e4-4b90-8cad-6f4e97daf0f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:06:54 np0005604943 systemd-machined[206973]: New machine qemu-22-instance-00000016.
Feb  2 07:06:54 np0005604943 systemd[1]: Started Virtual Machine qemu-22-instance-00000016.
Feb  2 07:06:54 np0005604943 podman[265556]: 2026-02-02 12:06:54.379089357 +0000 UTC m=+0.067407779 container create 713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true)
Feb  2 07:06:54 np0005604943 podman[265556]: 2026-02-02 12:06:54.336529819 +0000 UTC m=+0.024848271 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:06:54 np0005604943 systemd[1]: Started libpod-conmon-713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3.scope.
Feb  2 07:06:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 395 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 19 MiB/s wr, 104 op/s
Feb  2 07:06:54 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:06:54 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988c5e63f9f92b24f3bda8d38f01b1d7dce6747f6e4d0e56d63ed2779e8108d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:54 np0005604943 podman[265556]: 2026-02-02 12:06:54.476962268 +0000 UTC m=+0.165280710 container init 713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 07:06:54 np0005604943 podman[265556]: 2026-02-02 12:06:54.481162411 +0000 UTC m=+0.169480833 container start 713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb  2 07:06:54 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[265597]: [NOTICE]   (265602) : New worker (265604) forked
Feb  2 07:06:54 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[265597]: [NOTICE]   (265602) : Loading success.
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.529 238887 DEBUG nova.compute.manager [req-f23991b8-69a5-4ba1-af8c-4ee5e63850d9 req-ad5708b0-ac44-4bae-9a8d-1171fdb18036 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Received event network-vif-plugged-e40b0257-80f7-4e6f-ab5e-058f6961b2fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.529 238887 DEBUG oslo_concurrency.lockutils [req-f23991b8-69a5-4ba1-af8c-4ee5e63850d9 req-ad5708b0-ac44-4bae-9a8d-1171fdb18036 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.530 238887 DEBUG oslo_concurrency.lockutils [req-f23991b8-69a5-4ba1-af8c-4ee5e63850d9 req-ad5708b0-ac44-4bae-9a8d-1171fdb18036 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.531 238887 DEBUG oslo_concurrency.lockutils [req-f23991b8-69a5-4ba1-af8c-4ee5e63850d9 req-ad5708b0-ac44-4bae-9a8d-1171fdb18036 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.531 238887 DEBUG nova.compute.manager [req-f23991b8-69a5-4ba1-af8c-4ee5e63850d9 req-ad5708b0-ac44-4bae-9a8d-1171fdb18036 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Processing event network-vif-plugged-e40b0257-80f7-4e6f-ab5e-058f6961b2fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.579 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 0afadb99-91e4-4b90-8cad-6f4e97daf0f9 in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 unbound from our chassis#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.582 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34290362-cccd-452d-8e7e-22a6057fdb60#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.591 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[18ce67cb-544d-4a32-b057-dbf5e15f0444]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.593 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34290362-c1 in ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.596 238887 DEBUG nova.compute.manager [req-56962ebc-642e-49cd-b098-279a458f972f req-cc096c54-4e9d-4560-b6ae-c83d1e100804 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Received event network-vif-plugged-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.596 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34290362-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.596 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[e40ce5a5-601d-4ca7-806e-67e6ca978c89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.596 238887 DEBUG oslo_concurrency.lockutils [req-56962ebc-642e-49cd-b098-279a458f972f req-cc096c54-4e9d-4560-b6ae-c83d1e100804 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.597 238887 DEBUG oslo_concurrency.lockutils [req-56962ebc-642e-49cd-b098-279a458f972f req-cc096c54-4e9d-4560-b6ae-c83d1e100804 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.597 238887 DEBUG oslo_concurrency.lockutils [req-56962ebc-642e-49cd-b098-279a458f972f req-cc096c54-4e9d-4560-b6ae-c83d1e100804 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.597 238887 DEBUG nova.compute.manager [req-56962ebc-642e-49cd-b098-279a458f972f req-cc096c54-4e9d-4560-b6ae-c83d1e100804 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Processing event network-vif-plugged-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.597 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[419ecd05-403d-42e2-97f5-9907495b5098]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.611 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[015e30a8-8024-4711-81f0-d1840e0d8420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.629 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d7122b66-4bf1-498d-82f5-efa5b29cfebf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.654 238887 DEBUG nova.network.neutron [req-697ac2b5-8873-4392-aa26-0f67a238274a req-90f16779-d9d6-46ce-ac2b-b0c3e51da6b2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Updated VIF entry in instance network info cache for port 0afadb99-91e4-4b90-8cad-6f4e97daf0f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.655 238887 DEBUG nova.network.neutron [req-697ac2b5-8873-4392-aa26-0f67a238274a req-90f16779-d9d6-46ce-ac2b-b0c3e51da6b2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Updating instance_info_cache with network_info: [{"id": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "address": "fa:16:3e:18:be:eb", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afadb99-91", "ovs_interfaceid": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.658 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[efde07fc-bf35-438b-9622-c30e8631198b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 NetworkManager[49093]: <info>  [1770034014.6703] manager: (tap34290362-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.672 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6d185c82-9976-4687-8c19-bd958f6273ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.678 238887 DEBUG oslo_concurrency.lockutils [req-697ac2b5-8873-4392-aa26-0f67a238274a req-90f16779-d9d6-46ce-ac2b-b0c3e51da6b2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.712 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[d0280929-4318-4c03-8ae1-9e7dde009dbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.720 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[e60c3d53-ad6d-4325-b017-166e4ead58c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 NetworkManager[49093]: <info>  [1770034014.7420] device (tap34290362-c0): carrier: link connected
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.746 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[9c027799-87c6-4002-a467-2d4cf05d3fe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.767 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[017e1a58-bbfe-4096-97c3-0a126e1b784b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446009, 'reachable_time': 23880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265666, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.786 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2e7b29-369f-404b-a9fd-507ba17ca9ee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb3:39d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446009, 'tstamp': 446009}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265668, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.799 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034014.7986467, 140c7b65-c11d-4032-aaf8-db6b3df5127e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.800 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] VM Started (Lifecycle Event)#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.802 238887 DEBUG nova.compute.manager [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.806 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.809 238887 INFO nova.virt.libvirt.driver [-] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Instance spawned successfully.#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.810 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.811 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[4bebf857-2bfa-4563-b32f-dbbd91214efb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446009, 'reachable_time': 23880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265669, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.830 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.837 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.840 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.840 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.840 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.841 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.841 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.841 238887 DEBUG nova.virt.libvirt.driver [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.844 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f094b7-696a-46b4-a0bd-ce0e80e75366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.868 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.869 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034014.8000891, 140c7b65-c11d-4032-aaf8-db6b3df5127e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.869 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.898 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.902 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034014.8053536, 140c7b65-c11d-4032-aaf8-db6b3df5127e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.903 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.905 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[5963f666-66c0-4c1d-bc4a-c4dba7e247af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.907 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.907 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.908 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34290362-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:54 np0005604943 NetworkManager[49093]: <info>  [1770034014.9117] manager: (tap34290362-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Feb  2 07:06:54 np0005604943 kernel: tap34290362-c0: entered promiscuous mode
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.914 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34290362-c0, col_values=(('external_ids', {'iface-id': '54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.914 238887 INFO nova.compute.manager [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Took 3.87 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:06:54 np0005604943 ovn_controller[145056]: 2026-02-02T12:06:54Z|00214|binding|INFO|Releasing lport 54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c from this chassis (sb_readonly=0)
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.915 238887 DEBUG nova.compute.manager [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.916 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.917 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.918 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[19c18578-7120-429b-b373-edff6f5a992c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.918 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-34290362-cccd-452d-8e7e-22a6057fdb60
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/34290362-cccd-452d-8e7e-22a6057fdb60.pid.haproxy
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID 34290362-cccd-452d-8e7e-22a6057fdb60
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 07:06:54 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:06:54.920 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'env', 'PROCESS_TAG=haproxy-34290362-cccd-452d-8e7e-22a6057fdb60', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34290362-cccd-452d-8e7e-22a6057fdb60.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.922 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.924 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.935 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.971 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:06:54 np0005604943 nova_compute[238883]: 2026-02-02 12:06:54.990 238887 INFO nova.compute.manager [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Took 6.66 seconds to build instance.#033[00m
Feb  2 07:06:55 np0005604943 nova_compute[238883]: 2026-02-02 12:06:55.010 238887 DEBUG oslo_concurrency.lockutils [None req-a291eebc-f6f7-495f-9576-66f6a0dd8cb2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:55 np0005604943 podman[265701]: 2026-02-02 12:06:55.271292418 +0000 UTC m=+0.066317081 container create 117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb  2 07:06:55 np0005604943 systemd[1]: Started libpod-conmon-117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26.scope.
Feb  2 07:06:55 np0005604943 podman[265701]: 2026-02-02 12:06:55.227937818 +0000 UTC m=+0.022962511 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:06:55 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:06:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a92e0eb499f31ca803eb1dac543b24a235b941239a470adad3c760fd7b418e13/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:06:55 np0005604943 podman[265701]: 2026-02-02 12:06:55.377943975 +0000 UTC m=+0.172968658 container init 117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 07:06:55 np0005604943 podman[265701]: 2026-02-02 12:06:55.385143859 +0000 UTC m=+0.180168532 container start 117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 07:06:55 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[265716]: [NOTICE]   (265720) : New worker (265722) forked
Feb  2 07:06:55 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[265716]: [NOTICE]   (265720) : Loading success.
Feb  2 07:06:55 np0005604943 nova_compute[238883]: 2026-02-02 12:06:55.862 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "a8bef119-c694-432a-984b-0f0f2b570103" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:55 np0005604943 nova_compute[238883]: 2026-02-02 12:06:55.863 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:55 np0005604943 nova_compute[238883]: 2026-02-02 12:06:55.881 238887 DEBUG nova.compute.manager [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:06:55 np0005604943 nova_compute[238883]: 2026-02-02 12:06:55.951 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:55 np0005604943 nova_compute[238883]: 2026-02-02 12:06:55.952 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:55 np0005604943 nova_compute[238883]: 2026-02-02 12:06:55.960 238887 DEBUG nova.virt.hardware [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:06:55 np0005604943 nova_compute[238883]: 2026-02-02 12:06:55.960 238887 INFO nova.compute.claims [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.099 238887 DEBUG oslo_concurrency.processutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 395 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 18 MiB/s wr, 90 op/s
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.601 238887 DEBUG nova.compute.manager [req-879f7ff0-dff5-4d79-8110-ccb270b7afaf req-38f443de-3323-4dec-b7ce-163cc2cd448b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Received event network-vif-plugged-e40b0257-80f7-4e6f-ab5e-058f6961b2fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.601 238887 DEBUG oslo_concurrency.lockutils [req-879f7ff0-dff5-4d79-8110-ccb270b7afaf req-38f443de-3323-4dec-b7ce-163cc2cd448b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.602 238887 DEBUG oslo_concurrency.lockutils [req-879f7ff0-dff5-4d79-8110-ccb270b7afaf req-38f443de-3323-4dec-b7ce-163cc2cd448b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.602 238887 DEBUG oslo_concurrency.lockutils [req-879f7ff0-dff5-4d79-8110-ccb270b7afaf req-38f443de-3323-4dec-b7ce-163cc2cd448b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.602 238887 DEBUG nova.compute.manager [req-879f7ff0-dff5-4d79-8110-ccb270b7afaf req-38f443de-3323-4dec-b7ce-163cc2cd448b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] No waiting events found dispatching network-vif-plugged-e40b0257-80f7-4e6f-ab5e-058f6961b2fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.602 238887 WARNING nova.compute.manager [req-879f7ff0-dff5-4d79-8110-ccb270b7afaf req-38f443de-3323-4dec-b7ce-163cc2cd448b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Received unexpected event network-vif-plugged-e40b0257-80f7-4e6f-ab5e-058f6961b2fa for instance with vm_state building and task_state spawning.#033[00m
Feb  2 07:06:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:06:56 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401064817' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.661 238887 DEBUG nova.compute.manager [req-4b0eabba-9fc8-41fa-9f69-045cbb5a05bd req-24d69245-a1bd-4382-8c59-3e96da1678e3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Received event network-vif-plugged-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.661 238887 DEBUG oslo_concurrency.lockutils [req-4b0eabba-9fc8-41fa-9f69-045cbb5a05bd req-24d69245-a1bd-4382-8c59-3e96da1678e3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.662 238887 DEBUG oslo_concurrency.lockutils [req-4b0eabba-9fc8-41fa-9f69-045cbb5a05bd req-24d69245-a1bd-4382-8c59-3e96da1678e3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.662 238887 DEBUG oslo_concurrency.lockutils [req-4b0eabba-9fc8-41fa-9f69-045cbb5a05bd req-24d69245-a1bd-4382-8c59-3e96da1678e3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.662 238887 DEBUG nova.compute.manager [req-4b0eabba-9fc8-41fa-9f69-045cbb5a05bd req-24d69245-a1bd-4382-8c59-3e96da1678e3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] No waiting events found dispatching network-vif-plugged-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.662 238887 WARNING nova.compute.manager [req-4b0eabba-9fc8-41fa-9f69-045cbb5a05bd req-24d69245-a1bd-4382-8c59-3e96da1678e3 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Received unexpected event network-vif-plugged-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 for instance with vm_state active and task_state None.#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.663 238887 DEBUG oslo_concurrency.processutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.670 238887 DEBUG nova.compute.provider_tree [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.680 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.688 238887 DEBUG nova.scheduler.client.report [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.710 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.711 238887 DEBUG nova.compute.manager [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.763 238887 DEBUG nova.compute.manager [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.765 238887 DEBUG nova.network.neutron [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.791 238887 INFO nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.866 238887 DEBUG nova.compute.manager [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.912 238887 INFO nova.virt.block_device [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Booting with volume 225f2c13-e4a1-43d3-a23f-dda36481d3ad at /dev/vda#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.960 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034016.9601324, d0404e7d-4162-4ea0-86e0-e7869e7fb702 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.961 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] VM Started (Lifecycle Event)#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.966 238887 DEBUG nova.policy [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '084f489a7b4c4fecba7b0942ed1b7203', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '851fb6d80faf43cc9b2fef1913323704', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.970 238887 DEBUG nova.compute.manager [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.973 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.978 238887 INFO nova.virt.libvirt.driver [-] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Instance spawned successfully.#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.978 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.991 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:56 np0005604943 nova_compute[238883]: 2026-02-02 12:06:56.994 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.006 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.007 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.007 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.007 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.008 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.008 238887 DEBUG nova.virt.libvirt.driver [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.015 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.016 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034016.9615479, d0404e7d-4162-4ea0-86e0-e7869e7fb702 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.016 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.024 238887 DEBUG os_brick.utils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.025 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.039 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.039 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd85df2-b1a2-4edb-8583-c83be2c98e14]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.041 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.052 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.051 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.051 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[d296bd2f-230d-4a87-9788-63f32f084c3b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.055 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.061 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034016.973119, d0404e7d-4162-4ea0-86e0-e7869e7fb702 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.061 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.065 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.065 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[6267775c-01f7-4178-a02c-b335d2d33beb]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.067 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[5eee3e2a-2c04-49b4-85fa-413ae71adb04]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.067 238887 DEBUG oslo_concurrency.processutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.094 238887 INFO nova.compute.manager [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Took 6.67 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.095 238887 DEBUG nova.compute.manager [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.096 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.098 238887 DEBUG oslo_concurrency.processutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.102 238887 DEBUG os_brick.initiator.connectors.lightos [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.102 238887 DEBUG os_brick.initiator.connectors.lightos [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.102 238887 DEBUG os_brick.initiator.connectors.lightos [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.103 238887 DEBUG os_brick.utils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.103 238887 DEBUG nova.virt.block_device [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Updating existing volume attachment record: bd28608c-0b83-4110-844a-e7b672412ed3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.112 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.147 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.202 238887 INFO nova.compute.manager [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Took 9.45 seconds to build instance.#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.222 238887 DEBUG oslo_concurrency.lockutils [None req-9111c62b-e5ea-407b-8bba-23a1027ef099 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.426 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770034002.4246635, 117c0603-9127-4e21-9fc6-df67391a5b24 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.427 238887 INFO nova.compute.manager [-] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:06:57 np0005604943 nova_compute[238883]: 2026-02-02 12:06:57.446 238887 DEBUG nova.compute.manager [None req-808671ba-f8f5-4569-858f-e188243a61c4 - - - - - -] [instance: 117c0603-9127-4e21-9fc6-df67391a5b24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:06:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:06:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:06:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1502578852' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:06:58 np0005604943 nova_compute[238883]: 2026-02-02 12:06:58.023 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:06:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 395 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 18 MiB/s wr, 135 op/s
Feb  2 07:07:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 395 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 12 MiB/s wr, 158 op/s
Feb  2 07:07:01 np0005604943 nova_compute[238883]: 2026-02-02 12:07:01.682 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 395 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 9.4 MiB/s wr, 192 op/s
Feb  2 07:07:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:07:03 np0005604943 nova_compute[238883]: 2026-02-02 12:07:03.026 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:03 np0005604943 nova_compute[238883]: 2026-02-02 12:07:03.627 238887 DEBUG nova.compute.manager [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:07:03 np0005604943 nova_compute[238883]: 2026-02-02 12:07:03.629 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:07:03 np0005604943 nova_compute[238883]: 2026-02-02 12:07:03.630 238887 INFO nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Creating image(s)#033[00m
Feb  2 07:07:03 np0005604943 nova_compute[238883]: 2026-02-02 12:07:03.630 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 07:07:03 np0005604943 nova_compute[238883]: 2026-02-02 12:07:03.631 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Ensure instance console log exists: /var/lib/nova/instances/a8bef119-c694-432a-984b-0f0f2b570103/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:07:03 np0005604943 nova_compute[238883]: 2026-02-02 12:07:03.631 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:03 np0005604943 nova_compute[238883]: 2026-02-02 12:07:03.632 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:03 np0005604943 nova_compute[238883]: 2026-02-02 12:07:03.632 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:03 np0005604943 nova_compute[238883]: 2026-02-02 12:07:03.777 238887 DEBUG nova.network.neutron [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Successfully created port: 84c80f93-58de-43af-9685-e46ce8e0854f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:07:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 395 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 158 op/s
Feb  2 07:07:04 np0005604943 nova_compute[238883]: 2026-02-02 12:07:04.520 238887 DEBUG nova.compute.manager [req-fb294968-7fc2-4ab4-8b6a-a11b14f01d81 req-716d024e-d79b-4264-bc53-baba5f48ed32 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Received event network-changed-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:04 np0005604943 nova_compute[238883]: 2026-02-02 12:07:04.521 238887 DEBUG nova.compute.manager [req-fb294968-7fc2-4ab4-8b6a-a11b14f01d81 req-716d024e-d79b-4264-bc53-baba5f48ed32 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Refreshing instance network info cache due to event network-changed-0afadb99-91e4-4b90-8cad-6f4e97daf0f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:07:04 np0005604943 nova_compute[238883]: 2026-02-02 12:07:04.522 238887 DEBUG oslo_concurrency.lockutils [req-fb294968-7fc2-4ab4-8b6a-a11b14f01d81 req-716d024e-d79b-4264-bc53-baba5f48ed32 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:07:04 np0005604943 nova_compute[238883]: 2026-02-02 12:07:04.522 238887 DEBUG oslo_concurrency.lockutils [req-fb294968-7fc2-4ab4-8b6a-a11b14f01d81 req-716d024e-d79b-4264-bc53-baba5f48ed32 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:07:04 np0005604943 nova_compute[238883]: 2026-02-02 12:07:04.522 238887 DEBUG nova.network.neutron [req-fb294968-7fc2-4ab4-8b6a-a11b14f01d81 req-716d024e-d79b-4264-bc53-baba5f48ed32 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Refreshing network info cache for port 0afadb99-91e4-4b90-8cad-6f4e97daf0f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:07:04 np0005604943 nova_compute[238883]: 2026-02-02 12:07:04.628 238887 DEBUG nova.network.neutron [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Successfully updated port: 84c80f93-58de-43af-9685-e46ce8e0854f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:07:04 np0005604943 nova_compute[238883]: 2026-02-02 12:07:04.659 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "refresh_cache-a8bef119-c694-432a-984b-0f0f2b570103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:07:04 np0005604943 nova_compute[238883]: 2026-02-02 12:07:04.660 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquired lock "refresh_cache-a8bef119-c694-432a-984b-0f0f2b570103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:07:04 np0005604943 nova_compute[238883]: 2026-02-02 12:07:04.660 238887 DEBUG nova.network.neutron [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:07:04 np0005604943 nova_compute[238883]: 2026-02-02 12:07:04.736 238887 DEBUG nova.compute.manager [req-66d4816b-ad85-48d7-8dd4-32493efde65e req-815f6615-8e99-4352-b561-44d28e4a545f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Received event network-changed-84c80f93-58de-43af-9685-e46ce8e0854f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:04 np0005604943 nova_compute[238883]: 2026-02-02 12:07:04.737 238887 DEBUG nova.compute.manager [req-66d4816b-ad85-48d7-8dd4-32493efde65e req-815f6615-8e99-4352-b561-44d28e4a545f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Refreshing instance network info cache due to event network-changed-84c80f93-58de-43af-9685-e46ce8e0854f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:07:04 np0005604943 nova_compute[238883]: 2026-02-02 12:07:04.737 238887 DEBUG oslo_concurrency.lockutils [req-66d4816b-ad85-48d7-8dd4-32493efde65e req-815f6615-8e99-4352-b561-44d28e4a545f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-a8bef119-c694-432a-984b-0f0f2b570103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:07:04 np0005604943 nova_compute[238883]: 2026-02-02 12:07:04.821 238887 DEBUG nova.network.neutron [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:07:05 np0005604943 nova_compute[238883]: 2026-02-02 12:07:05.780 238887 DEBUG nova.network.neutron [req-fb294968-7fc2-4ab4-8b6a-a11b14f01d81 req-716d024e-d79b-4264-bc53-baba5f48ed32 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Updated VIF entry in instance network info cache for port 0afadb99-91e4-4b90-8cad-6f4e97daf0f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:07:05 np0005604943 nova_compute[238883]: 2026-02-02 12:07:05.782 238887 DEBUG nova.network.neutron [req-fb294968-7fc2-4ab4-8b6a-a11b14f01d81 req-716d024e-d79b-4264-bc53-baba5f48ed32 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Updating instance_info_cache with network_info: [{"id": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "address": "fa:16:3e:18:be:eb", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afadb99-91", "ovs_interfaceid": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:05 np0005604943 nova_compute[238883]: 2026-02-02 12:07:05.803 238887 DEBUG oslo_concurrency.lockutils [req-fb294968-7fc2-4ab4-8b6a-a11b14f01d81 req-716d024e-d79b-4264-bc53-baba5f48ed32 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.103 238887 DEBUG nova.network.neutron [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Updating instance_info_cache with network_info: [{"id": "84c80f93-58de-43af-9685-e46ce8e0854f", "address": "fa:16:3e:47:96:08", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c80f93-58", "ovs_interfaceid": "84c80f93-58de-43af-9685-e46ce8e0854f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.131 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Releasing lock "refresh_cache-a8bef119-c694-432a-984b-0f0f2b570103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.133 238887 DEBUG nova.compute.manager [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Instance network_info: |[{"id": "84c80f93-58de-43af-9685-e46ce8e0854f", "address": "fa:16:3e:47:96:08", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c80f93-58", "ovs_interfaceid": "84c80f93-58de-43af-9685-e46ce8e0854f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.134 238887 DEBUG oslo_concurrency.lockutils [req-66d4816b-ad85-48d7-8dd4-32493efde65e req-815f6615-8e99-4352-b561-44d28e4a545f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-a8bef119-c694-432a-984b-0f0f2b570103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.134 238887 DEBUG nova.network.neutron [req-66d4816b-ad85-48d7-8dd4-32493efde65e req-815f6615-8e99-4352-b561-44d28e4a545f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Refreshing network info cache for port 84c80f93-58de-43af-9685-e46ce8e0854f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.139 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Start _get_guest_xml network_info=[{"id": "84c80f93-58de-43af-9685-e46ce8e0854f", "address": "fa:16:3e:47:96:08", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c80f93-58", "ovs_interfaceid": "84c80f93-58de-43af-9685-e46ce8e0854f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': 'bd28608c-0b83-4110-844a-e7b672412ed3', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-225f2c13-e4a1-43d3-a23f-dda36481d3ad', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '225f2c13-e4a1-43d3-a23f-dda36481d3ad', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'a8bef119-c694-432a-984b-0f0f2b570103', 'attached_at': '', 'detached_at': '', 'volume_id': '225f2c13-e4a1-43d3-a23f-dda36481d3ad', 'serial': '225f2c13-e4a1-43d3-a23f-dda36481d3ad'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.145 238887 WARNING nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.154 238887 DEBUG nova.virt.libvirt.host [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.156 238887 DEBUG nova.virt.libvirt.host [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.163 238887 DEBUG nova.virt.libvirt.host [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.164 238887 DEBUG nova.virt.libvirt.host [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.165 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.165 238887 DEBUG nova.virt.hardware [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.166 238887 DEBUG nova.virt.hardware [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.166 238887 DEBUG nova.virt.hardware [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.166 238887 DEBUG nova.virt.hardware [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.167 238887 DEBUG nova.virt.hardware [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.167 238887 DEBUG nova.virt.hardware [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.167 238887 DEBUG nova.virt.hardware [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.167 238887 DEBUG nova.virt.hardware [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.168 238887 DEBUG nova.virt.hardware [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.168 238887 DEBUG nova.virt.hardware [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.168 238887 DEBUG nova.virt.hardware [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.196 238887 DEBUG nova.storage.rbd_utils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] rbd image a8bef119-c694-432a-984b-0f0f2b570103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.204 238887 DEBUG oslo_concurrency.processutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 395 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 25 KiB/s wr, 145 op/s
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.685 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:07:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2866202920' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.851 238887 DEBUG oslo_concurrency.processutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.969 238887 DEBUG os_brick.encryptors [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Using volume encryption metadata '{'encryption_key_id': 'a07865d1-fe31-49d5-84b9-12ea01685c86', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-225f2c13-e4a1-43d3-a23f-dda36481d3ad', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '225f2c13-e4a1-43d3-a23f-dda36481d3ad', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'a8bef119-c694-432a-984b-0f0f2b570103', 'attached_at': '', 'detached_at': '', 'volume_id': '225f2c13-e4a1-43d3-a23f-dda36481d3ad', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.972 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.991 238887 DEBUG barbicanclient.v1.secrets [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/a07865d1-fe31-49d5-84b9-12ea01685c86 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 07:07:06 np0005604943 nova_compute[238883]: 2026-02-02 12:07:06.992 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.013 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.014 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.042 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.043 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.066 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.067 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.086 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.088 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.109 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.110 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.243 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.244 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.277 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.278 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.298 238887 DEBUG nova.compute.manager [req-b2fff5bd-37a5-4aa2-bd77-0235ee980646 req-9051f7fe-989f-4bd9-976f-5a10a200176a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Received event network-changed-e40b0257-80f7-4e6f-ab5e-058f6961b2fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.299 238887 DEBUG nova.compute.manager [req-b2fff5bd-37a5-4aa2-bd77-0235ee980646 req-9051f7fe-989f-4bd9-976f-5a10a200176a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Refreshing instance network info cache due to event network-changed-e40b0257-80f7-4e6f-ab5e-058f6961b2fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.299 238887 DEBUG oslo_concurrency.lockutils [req-b2fff5bd-37a5-4aa2-bd77-0235ee980646 req-9051f7fe-989f-4bd9-976f-5a10a200176a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-d0404e7d-4162-4ea0-86e0-e7869e7fb702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.299 238887 DEBUG oslo_concurrency.lockutils [req-b2fff5bd-37a5-4aa2-bd77-0235ee980646 req-9051f7fe-989f-4bd9-976f-5a10a200176a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-d0404e7d-4162-4ea0-86e0-e7869e7fb702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.299 238887 DEBUG nova.network.neutron [req-b2fff5bd-37a5-4aa2-bd77-0235ee980646 req-9051f7fe-989f-4bd9-976f-5a10a200176a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Refreshing network info cache for port e40b0257-80f7-4e6f-ab5e-058f6961b2fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.312 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.313 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.333 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.334 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.354 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.355 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.376 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.378 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.400 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.401 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.421 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.422 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.440 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.440 238887 INFO barbicanclient.base [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/a07865d1-fe31-49d5-84b9-12ea01685c86#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.457 238887 DEBUG barbicanclient.client [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.458 238887 DEBUG nova.virt.libvirt.host [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  <usage type="volume">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <volume>225f2c13-e4a1-43d3-a23f-dda36481d3ad</volume>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  </usage>
Feb  2 07:07:07 np0005604943 nova_compute[238883]: </secret>
Feb  2 07:07:07 np0005604943 nova_compute[238883]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.488 238887 DEBUG nova.virt.libvirt.vif [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:06:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-483015749',display_name='tempest-TestEncryptedCinderVolumes-server-483015749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-483015749',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4sEG9hObpGnevoIlqMdkrX6LtyepBRCjADAYBnTUNxH7zE9sXens2JsebTT1q5zN1V4atJxK/wradQkp5n2K1zuz899xdCKCopiRNmhKseY0+RU/9UYAZOT5nySAcl7g==',key_name='tempest-TestEncryptedCinderVolumes-1244227927',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851fb6d80faf43cc9b2fef1913323704',ramdisk_id='',reservation_id='r-m3xj6x1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1976450145',owner_user_name='tempest-TestEncryptedCinderVolumes-1976450145-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:06:56Z,user_data=None,user_id='084f489a7b4c4fecba7b0942ed1b7203',uuid=a8bef119-c694-432a-984b-0f0f2b570103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84c80f93-58de-43af-9685-e46ce8e0854f", "address": "fa:16:3e:47:96:08", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c80f93-58", "ovs_interfaceid": "84c80f93-58de-43af-9685-e46ce8e0854f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.489 238887 DEBUG nova.network.os_vif_util [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converting VIF {"id": "84c80f93-58de-43af-9685-e46ce8e0854f", "address": "fa:16:3e:47:96:08", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c80f93-58", "ovs_interfaceid": "84c80f93-58de-43af-9685-e46ce8e0854f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.490 238887 DEBUG nova.network.os_vif_util [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:96:08,bridge_name='br-int',has_traffic_filtering=True,id=84c80f93-58de-43af-9685-e46ce8e0854f,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c80f93-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.492 238887 DEBUG nova.objects.instance [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lazy-loading 'pci_devices' on Instance uuid a8bef119-c694-432a-984b-0f0f2b570103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.506 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  <uuid>a8bef119-c694-432a-984b-0f0f2b570103</uuid>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  <name>instance-00000017</name>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-483015749</nova:name>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:07:06</nova:creationTime>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        <nova:user uuid="084f489a7b4c4fecba7b0942ed1b7203">tempest-TestEncryptedCinderVolumes-1976450145-project-member</nova:user>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        <nova:project uuid="851fb6d80faf43cc9b2fef1913323704">tempest-TestEncryptedCinderVolumes-1976450145</nova:project>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        <nova:port uuid="84c80f93-58de-43af-9685-e46ce8e0854f">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <entry name="serial">a8bef119-c694-432a-984b-0f0f2b570103</entry>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <entry name="uuid">a8bef119-c694-432a-984b-0f0f2b570103</entry>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/a8bef119-c694-432a-984b-0f0f2b570103_disk.config">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-225f2c13-e4a1-43d3-a23f-dda36481d3ad">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <serial>225f2c13-e4a1-43d3-a23f-dda36481d3ad</serial>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <encryption format="luks">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:        <secret type="passphrase" uuid="e0c1c414-bd4a-419d-b925-5edf67747058"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      </encryption>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:47:96:08"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <target dev="tap84c80f93-58"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/a8bef119-c694-432a-984b-0f0f2b570103/console.log" append="off"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:07:07 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:07:07 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:07:07 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:07:07 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.512 238887 DEBUG nova.compute.manager [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Preparing to wait for external event network-vif-plugged-84c80f93-58de-43af-9685-e46ce8e0854f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.513 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "a8bef119-c694-432a-984b-0f0f2b570103-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.513 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.513 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.514 238887 DEBUG nova.virt.libvirt.vif [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:06:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-483015749',display_name='tempest-TestEncryptedCinderVolumes-server-483015749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-483015749',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4sEG9hObpGnevoIlqMdkrX6LtyepBRCjADAYBnTUNxH7zE9sXens2JsebTT1q5zN1V4atJxK/wradQkp5n2K1zuz899xdCKCopiRNmhKseY0+RU/9UYAZOT5nySAcl7g==',key_name='tempest-TestEncryptedCinderVolumes-1244227927',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851fb6d80faf43cc9b2fef1913323704',ramdisk_id='',reservation_id='r-m3xj6x1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1976450145',owner_user_name='tempest-TestEncryptedCinderVolumes-1976450145-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:06:56Z,user_data=None,user_id='084f489a7b4c4fecba7b0942ed1b7203',uuid=a8bef119-c694-432a-984b-0f0f2b570103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84c80f93-58de-43af-9685-e46ce8e0854f", "address": "fa:16:3e:47:96:08", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c80f93-58", "ovs_interfaceid": "84c80f93-58de-43af-9685-e46ce8e0854f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.515 238887 DEBUG nova.network.os_vif_util [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converting VIF {"id": "84c80f93-58de-43af-9685-e46ce8e0854f", "address": "fa:16:3e:47:96:08", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c80f93-58", "ovs_interfaceid": "84c80f93-58de-43af-9685-e46ce8e0854f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.516 238887 DEBUG nova.network.os_vif_util [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:96:08,bridge_name='br-int',has_traffic_filtering=True,id=84c80f93-58de-43af-9685-e46ce8e0854f,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c80f93-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.516 238887 DEBUG os_vif [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:96:08,bridge_name='br-int',has_traffic_filtering=True,id=84c80f93-58de-43af-9685-e46ce8e0854f,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c80f93-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.517 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.518 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.518 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.521 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.521 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84c80f93-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.522 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84c80f93-58, col_values=(('external_ids', {'iface-id': '84c80f93-58de-43af-9685-e46ce8e0854f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:96:08', 'vm-uuid': 'a8bef119-c694-432a-984b-0f0f2b570103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.524 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:07 np0005604943 NetworkManager[49093]: <info>  [1770034027.5251] manager: (tap84c80f93-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.528 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.531 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.531 238887 INFO os_vif [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:96:08,bridge_name='br-int',has_traffic_filtering=True,id=84c80f93-58de-43af-9685-e46ce8e0854f,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c80f93-58')#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.582 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.583 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.583 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] No VIF found with MAC fa:16:3e:47:96:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.584 238887 INFO nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Using config drive#033[00m
Feb  2 07:07:07 np0005604943 nova_compute[238883]: 2026-02-02 12:07:07.606 238887 DEBUG nova.storage.rbd_utils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] rbd image a8bef119-c694-432a-984b-0f0f2b570103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:07:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:07:07 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:07Z|00040|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.9
Feb  2 07:07:07 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:07Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:18:be:eb 10.100.0.9
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.049 238887 INFO nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Creating config drive at /var/lib/nova/instances/a8bef119-c694-432a-984b-0f0f2b570103/disk.config#033[00m
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.055 238887 DEBUG oslo_concurrency.processutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8bef119-c694-432a-984b-0f0f2b570103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8aw180xb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.189 238887 DEBUG oslo_concurrency.processutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8bef119-c694-432a-984b-0f0f2b570103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8aw180xb" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.221 238887 DEBUG nova.storage.rbd_utils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] rbd image a8bef119-c694-432a-984b-0f0f2b570103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.226 238887 DEBUG oslo_concurrency.processutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8bef119-c694-432a-984b-0f0f2b570103/disk.config a8bef119-c694-432a-984b-0f0f2b570103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.259 238887 DEBUG nova.network.neutron [req-66d4816b-ad85-48d7-8dd4-32493efde65e req-815f6615-8e99-4352-b561-44d28e4a545f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Updated VIF entry in instance network info cache for port 84c80f93-58de-43af-9685-e46ce8e0854f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.261 238887 DEBUG nova.network.neutron [req-66d4816b-ad85-48d7-8dd4-32493efde65e req-815f6615-8e99-4352-b561-44d28e4a545f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Updating instance_info_cache with network_info: [{"id": "84c80f93-58de-43af-9685-e46ce8e0854f", "address": "fa:16:3e:47:96:08", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c80f93-58", "ovs_interfaceid": "84c80f93-58de-43af-9685-e46ce8e0854f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.331 238887 DEBUG oslo_concurrency.lockutils [req-66d4816b-ad85-48d7-8dd4-32493efde65e req-815f6615-8e99-4352-b561-44d28e4a545f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-a8bef119-c694-432a-984b-0f0f2b570103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.408 238887 DEBUG oslo_concurrency.processutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8bef119-c694-432a-984b-0f0f2b570103/disk.config a8bef119-c694-432a-984b-0f0f2b570103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.409 238887 INFO nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Deleting local config drive /var/lib/nova/instances/a8bef119-c694-432a-984b-0f0f2b570103/disk.config because it was imported into RBD.#033[00m
Feb  2 07:07:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 395 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 37 KiB/s wr, 187 op/s
Feb  2 07:07:08 np0005604943 kernel: tap84c80f93-58: entered promiscuous mode
Feb  2 07:07:08 np0005604943 NetworkManager[49093]: <info>  [1770034028.4561] manager: (tap84c80f93-58): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.458 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:08 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:08Z|00215|binding|INFO|Claiming lport 84c80f93-58de-43af-9685-e46ce8e0854f for this chassis.
Feb  2 07:07:08 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:08Z|00216|binding|INFO|84c80f93-58de-43af-9685-e46ce8e0854f: Claiming fa:16:3e:47:96:08 10.100.0.13
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.466 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:96:08 10.100.0.13'], port_security=['fa:16:3e:47:96:08 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a8bef119-c694-432a-984b-0f0f2b570103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb13b2a6-b763-41ef-a5c4-123372e94249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851fb6d80faf43cc9b2fef1913323704', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e0a2abe2-60a1-49ea-89b8-ea7fffedac5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10f2dc12-4c00-4783-968f-4cacec86630e, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=84c80f93-58de-43af-9685-e46ce8e0854f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:07:08 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:08Z|00217|binding|INFO|Setting lport 84c80f93-58de-43af-9685-e46ce8e0854f ovn-installed in OVS
Feb  2 07:07:08 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:08Z|00218|binding|INFO|Setting lport 84c80f93-58de-43af-9685-e46ce8e0854f up in Southbound
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.468 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 84c80f93-58de-43af-9685-e46ce8e0854f in datapath fb13b2a6-b763-41ef-a5c4-123372e94249 bound to our chassis#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.470 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fb13b2a6-b763-41ef-a5c4-123372e94249#033[00m
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.471 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.482 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9eb22e-f735-4d7d-835f-d2f5d821f845]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.483 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfb13b2a6-b1 in ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:07:08 np0005604943 systemd-udevd[265880]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.485 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfb13b2a6-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.486 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec72547-e03a-42c6-8aca-5091bd760d4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.487 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e2fd77-6707-4c19-bdd2-9fa956ed389a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 NetworkManager[49093]: <info>  [1770034028.4964] device (tap84c80f93-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:07:08 np0005604943 NetworkManager[49093]: <info>  [1770034028.4970] device (tap84c80f93-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:07:08 np0005604943 systemd-machined[206973]: New machine qemu-23-instance-00000017.
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.501 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[e6393c25-e7fb-4d93-b9ae-2d84ba74ce68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.514 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[0971c5f4-5823-434e-a7b0-34dbc1ae5f54]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 systemd[1]: Started Virtual Machine qemu-23-instance-00000017.
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.538 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[a5561568-e252-48b5-99bf-fce30d59b743]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 systemd-udevd[265886]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.544 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8f914f-8be9-4ef6-810f-b1de69d7fda8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 NetworkManager[49093]: <info>  [1770034028.5462] manager: (tapfb13b2a6-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/118)
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.578 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e7103b-bb7a-49f9-83d2-2f9ec1893f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.583 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[b215b121-1d81-49a0-b8c9-7b055e03c3de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 NetworkManager[49093]: <info>  [1770034028.6078] device (tapfb13b2a6-b0): carrier: link connected
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.613 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca5d6b4-df67-4902-8ff7-e429bb7314fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.632 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[96800548-e9c2-479a-aafb-bb94f98a9b34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb13b2a6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:41:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447395, 'reachable_time': 31493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265915, 'error': None, 'target': 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.647 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d12fa0fa-0991-4551-9381-8253c795a769]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed1:4144'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447395, 'tstamp': 447395}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265916, 'error': None, 'target': 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.670 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a33fad27-c703-4eb7-abc1-6fd3b1b2cd6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb13b2a6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:41:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447395, 'reachable_time': 31493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265917, 'error': None, 'target': 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.698 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ec036e-e1a0-4bb9-9ed4-155e7870983a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.755 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[400d8cf1-dccf-46e8-bb89-23510096da91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.757 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb13b2a6-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.757 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.757 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb13b2a6-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.759 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:08 np0005604943 NetworkManager[49093]: <info>  [1770034028.7605] manager: (tapfb13b2a6-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Feb  2 07:07:08 np0005604943 kernel: tapfb13b2a6-b0: entered promiscuous mode
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.762 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.765 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfb13b2a6-b0, col_values=(('external_ids', {'iface-id': '1d9983aa-de5e-40a5-bc99-8bde08c14b08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.766 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:08 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:08Z|00219|binding|INFO|Releasing lport 1d9983aa-de5e-40a5-bc99-8bde08c14b08 from this chassis (sb_readonly=0)
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.767 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:08 np0005604943 nova_compute[238883]: 2026-02-02 12:07:08.773 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.776 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fb13b2a6-b763-41ef-a5c4-123372e94249.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fb13b2a6-b763-41ef-a5c4-123372e94249.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.778 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[385fcd21-5df3-48fe-96ed-a851377a01aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.779 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-fb13b2a6-b763-41ef-a5c4-123372e94249
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/fb13b2a6-b763-41ef-a5c4-123372e94249.pid.haproxy
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID fb13b2a6-b763-41ef-a5c4-123372e94249
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 07:07:08 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:08.781 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'env', 'PROCESS_TAG=haproxy-fb13b2a6-b763-41ef-a5c4-123372e94249', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fb13b2a6-b763-41ef-a5c4-123372e94249.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 07:07:08 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:08Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:6f:59 10.100.0.11
Feb  2 07:07:08 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:08Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:6f:59 10.100.0.11
Feb  2 07:07:09 np0005604943 podman[265983]: 2026-02-02 12:07:09.141735585 +0000 UTC m=+0.049270530 container create 021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.196 238887 DEBUG nova.network.neutron [req-b2fff5bd-37a5-4aa2-bd77-0235ee980646 req-9051f7fe-989f-4bd9-976f-5a10a200176a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Updated VIF entry in instance network info cache for port e40b0257-80f7-4e6f-ab5e-058f6961b2fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:07:09 np0005604943 systemd[1]: Started libpod-conmon-021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795.scope.
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.198 238887 DEBUG nova.network.neutron [req-b2fff5bd-37a5-4aa2-bd77-0235ee980646 req-9051f7fe-989f-4bd9-976f-5a10a200176a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Updating instance_info_cache with network_info: [{"id": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "address": "fa:16:3e:c9:6f:59", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape40b0257-80", "ovs_interfaceid": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:09 np0005604943 podman[265983]: 2026-02-02 12:07:09.115999901 +0000 UTC m=+0.023534866 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:07:09 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.222 238887 DEBUG oslo_concurrency.lockutils [req-b2fff5bd-37a5-4aa2-bd77-0235ee980646 req-9051f7fe-989f-4bd9-976f-5a10a200176a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-d0404e7d-4162-4ea0-86e0-e7869e7fb702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:07:09 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e1d5508ce2d5d4244f8358e032685a43a127d29cd80144a07f595118cfeaaba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:09 np0005604943 podman[265983]: 2026-02-02 12:07:09.27274058 +0000 UTC m=+0.180275525 container init 021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:07:09 np0005604943 podman[265983]: 2026-02-02 12:07:09.278524286 +0000 UTC m=+0.186059231 container start 021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb  2 07:07:09 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[265998]: [NOTICE]   (266002) : New worker (266004) forked
Feb  2 07:07:09 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[265998]: [NOTICE]   (266002) : Loading success.
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.377 238887 DEBUG nova.compute.manager [req-1b5cebe8-c264-4942-8126-d473585ea217 req-f6a54a61-0b87-4e30-8787-d593c84fee38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Received event network-vif-plugged-84c80f93-58de-43af-9685-e46ce8e0854f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.380 238887 DEBUG oslo_concurrency.lockutils [req-1b5cebe8-c264-4942-8126-d473585ea217 req-f6a54a61-0b87-4e30-8787-d593c84fee38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "a8bef119-c694-432a-984b-0f0f2b570103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.381 238887 DEBUG oslo_concurrency.lockutils [req-1b5cebe8-c264-4942-8126-d473585ea217 req-f6a54a61-0b87-4e30-8787-d593c84fee38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.381 238887 DEBUG oslo_concurrency.lockutils [req-1b5cebe8-c264-4942-8126-d473585ea217 req-f6a54a61-0b87-4e30-8787-d593c84fee38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.383 238887 DEBUG nova.compute.manager [req-1b5cebe8-c264-4942-8126-d473585ea217 req-f6a54a61-0b87-4e30-8787-d593c84fee38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Processing event network-vif-plugged-84c80f93-58de-43af-9685-e46ce8e0854f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.384 238887 DEBUG nova.compute.manager [req-1b5cebe8-c264-4942-8126-d473585ea217 req-f6a54a61-0b87-4e30-8787-d593c84fee38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Received event network-vif-plugged-84c80f93-58de-43af-9685-e46ce8e0854f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.384 238887 DEBUG oslo_concurrency.lockutils [req-1b5cebe8-c264-4942-8126-d473585ea217 req-f6a54a61-0b87-4e30-8787-d593c84fee38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "a8bef119-c694-432a-984b-0f0f2b570103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.384 238887 DEBUG oslo_concurrency.lockutils [req-1b5cebe8-c264-4942-8126-d473585ea217 req-f6a54a61-0b87-4e30-8787-d593c84fee38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.385 238887 DEBUG oslo_concurrency.lockutils [req-1b5cebe8-c264-4942-8126-d473585ea217 req-f6a54a61-0b87-4e30-8787-d593c84fee38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.385 238887 DEBUG nova.compute.manager [req-1b5cebe8-c264-4942-8126-d473585ea217 req-f6a54a61-0b87-4e30-8787-d593c84fee38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] No waiting events found dispatching network-vif-plugged-84c80f93-58de-43af-9685-e46ce8e0854f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:07:09 np0005604943 nova_compute[238883]: 2026-02-02 12:07:09.385 238887 WARNING nova.compute.manager [req-1b5cebe8-c264-4942-8126-d473585ea217 req-f6a54a61-0b87-4e30-8787-d593c84fee38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Received unexpected event network-vif-plugged-84c80f93-58de-43af-9685-e46ce8e0854f for instance with vm_state building and task_state spawning.#033[00m
Feb  2 07:07:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:07:09
Feb  2 07:07:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:07:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:07:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'vms', 'default.rgw.meta', '.rgw.root']
Feb  2 07:07:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:07:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:10.032 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:10.032 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:10.034 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 395 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 676 KiB/s wr, 158 op/s
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:07:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:07:11 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:11Z|00044|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.9
Feb  2 07:07:11 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:11Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:18:be:eb 10.100.0.9
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.604 238887 DEBUG nova.compute.manager [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.606 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034031.603657, a8bef119-c694-432a-984b-0f0f2b570103 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.606 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a8bef119-c694-432a-984b-0f0f2b570103] VM Started (Lifecycle Event)#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.609 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.615 238887 INFO nova.virt.libvirt.driver [-] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Instance spawned successfully.#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.615 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.642 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.648 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.651 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.651 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.651 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.652 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.652 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.653 238887 DEBUG nova.virt.libvirt.driver [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.678 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a8bef119-c694-432a-984b-0f0f2b570103] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.679 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034031.605044, a8bef119-c694-432a-984b-0f0f2b570103 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.679 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a8bef119-c694-432a-984b-0f0f2b570103] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.688 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.705 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.709 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034031.6087658, a8bef119-c694-432a-984b-0f0f2b570103 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.709 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a8bef119-c694-432a-984b-0f0f2b570103] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.713 238887 INFO nova.compute.manager [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Took 8.08 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.714 238887 DEBUG nova.compute.manager [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.724 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.728 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.761 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: a8bef119-c694-432a-984b-0f0f2b570103] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.781 238887 INFO nova.compute.manager [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Took 15.86 seconds to build instance.#033[00m
Feb  2 07:07:11 np0005604943 nova_compute[238883]: 2026-02-02 12:07:11.800 238887 DEBUG oslo_concurrency.lockutils [None req-7fb3d014-a9bf-4b0c-b453-59c0e922515b 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 444 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 160 op/s
Feb  2 07:07:12 np0005604943 nova_compute[238883]: 2026-02-02 12:07:12.524 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:07:12 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:12Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:18:be:eb 10.100.0.9
Feb  2 07:07:12 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:12Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:18:be:eb 10.100.0.9
Feb  2 07:07:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 466 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.8 MiB/s wr, 170 op/s
Feb  2 07:07:14 np0005604943 nova_compute[238883]: 2026-02-02 12:07:14.676 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:14 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:14.677 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:07:14 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:14.678 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 07:07:15 np0005604943 nova_compute[238883]: 2026-02-02 12:07:15.798 238887 DEBUG nova.compute.manager [req-01676af8-2970-4bc9-8bf1-56c41f20b54c req-f1ab1d2b-172b-4866-ba37-7e94567a91a7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Received event network-changed-84c80f93-58de-43af-9685-e46ce8e0854f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:15 np0005604943 nova_compute[238883]: 2026-02-02 12:07:15.799 238887 DEBUG nova.compute.manager [req-01676af8-2970-4bc9-8bf1-56c41f20b54c req-f1ab1d2b-172b-4866-ba37-7e94567a91a7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Refreshing instance network info cache due to event network-changed-84c80f93-58de-43af-9685-e46ce8e0854f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:07:15 np0005604943 nova_compute[238883]: 2026-02-02 12:07:15.800 238887 DEBUG oslo_concurrency.lockutils [req-01676af8-2970-4bc9-8bf1-56c41f20b54c req-f1ab1d2b-172b-4866-ba37-7e94567a91a7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-a8bef119-c694-432a-984b-0f0f2b570103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:07:15 np0005604943 nova_compute[238883]: 2026-02-02 12:07:15.800 238887 DEBUG oslo_concurrency.lockutils [req-01676af8-2970-4bc9-8bf1-56c41f20b54c req-f1ab1d2b-172b-4866-ba37-7e94567a91a7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-a8bef119-c694-432a-984b-0f0f2b570103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:07:15 np0005604943 nova_compute[238883]: 2026-02-02 12:07:15.800 238887 DEBUG nova.network.neutron [req-01676af8-2970-4bc9-8bf1-56c41f20b54c req-f1ab1d2b-172b-4866-ba37-7e94567a91a7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Refreshing network info cache for port 84c80f93-58de-43af-9685-e46ce8e0854f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:07:16 np0005604943 podman[266021]: 2026-02-02 12:07:16.07237853 +0000 UTC m=+0.083128173 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:07:16 np0005604943 podman[266020]: 2026-02-02 12:07:16.080652993 +0000 UTC m=+0.094134140 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, 
config_id=ovn_controller, container_name=ovn_controller)
Feb  2 07:07:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 466 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.8 MiB/s wr, 170 op/s
Feb  2 07:07:16 np0005604943 nova_compute[238883]: 2026-02-02 12:07:16.724 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:17 np0005604943 nova_compute[238883]: 2026-02-02 12:07:17.357 238887 DEBUG nova.network.neutron [req-01676af8-2970-4bc9-8bf1-56c41f20b54c req-f1ab1d2b-172b-4866-ba37-7e94567a91a7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Updated VIF entry in instance network info cache for port 84c80f93-58de-43af-9685-e46ce8e0854f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:07:17 np0005604943 nova_compute[238883]: 2026-02-02 12:07:17.358 238887 DEBUG nova.network.neutron [req-01676af8-2970-4bc9-8bf1-56c41f20b54c req-f1ab1d2b-172b-4866-ba37-7e94567a91a7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Updating instance_info_cache with network_info: [{"id": "84c80f93-58de-43af-9685-e46ce8e0854f", "address": "fa:16:3e:47:96:08", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c80f93-58", "ovs_interfaceid": "84c80f93-58de-43af-9685-e46ce8e0854f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:17 np0005604943 nova_compute[238883]: 2026-02-02 12:07:17.408 238887 DEBUG oslo_concurrency.lockutils [req-01676af8-2970-4bc9-8bf1-56c41f20b54c req-f1ab1d2b-172b-4866-ba37-7e94567a91a7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-a8bef119-c694-432a-984b-0f0f2b570103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:07:17 np0005604943 nova_compute[238883]: 2026-02-02 12:07:17.526 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:07:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 466 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.8 MiB/s wr, 196 op/s
Feb  2 07:07:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 466 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.8 MiB/s wr, 155 op/s
Feb  2 07:07:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Feb  2 07:07:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Feb  2 07:07:20 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.439323317486847e-06 of space, bias 1.0, pg target 0.002831796995246054 quantized to 32 (current 32)
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005492223568972397 of space, bias 1.0, pg target 1.6476670706917191 quantized to 32 (current 32)
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.028226867942067e-06 of space, bias 1.0, pg target 0.000606439833514678 quantized to 32 (current 32)
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006667421130670765 of space, bias 1.0, pg target 0.19935589180705587 quantized to 32 (current 32)
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0699105540655193e-06 of space, bias 4.0, pg target 0.0012796130226623611 quantized to 16 (current 16)
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:07:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb  2 07:07:21 np0005604943 nova_compute[238883]: 2026-02-02 12:07:21.728 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 466 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 110 op/s
Feb  2 07:07:22 np0005604943 nova_compute[238883]: 2026-02-02 12:07:22.528 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:22 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb  2 07:07:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.265 238887 DEBUG oslo_concurrency.lockutils [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.266 238887 DEBUG oslo_concurrency.lockutils [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.266 238887 DEBUG oslo_concurrency.lockutils [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.266 238887 DEBUG oslo_concurrency.lockutils [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.267 238887 DEBUG oslo_concurrency.lockutils [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.268 238887 INFO nova.compute.manager [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Terminating instance#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.269 238887 DEBUG nova.compute.manager [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:07:23 np0005604943 kernel: tape40b0257-80 (unregistering): left promiscuous mode
Feb  2 07:07:23 np0005604943 NetworkManager[49093]: <info>  [1770034043.3233] device (tape40b0257-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:07:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:23Z|00220|binding|INFO|Releasing lport e40b0257-80f7-4e6f-ab5e-058f6961b2fa from this chassis (sb_readonly=0)
Feb  2 07:07:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:23Z|00221|binding|INFO|Setting lport e40b0257-80f7-4e6f-ab5e-058f6961b2fa down in Southbound
Feb  2 07:07:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:23Z|00222|binding|INFO|Removing iface tape40b0257-80 ovn-installed in OVS
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.333 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.340 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.347 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:6f:59 10.100.0.11'], port_security=['fa:16:3e:c9:6f:59 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd0404e7d-4162-4ea0-86e0-e7869e7fb702', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efa24ae1-9962-44ca-882a-8d146356fcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c7b49c49c104c079544033b07fb2f3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd3424b4-e169-47dd-816d-ac2340e28ccc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b6e8bcf-741b-41c8-a826-9b6dbb1c260b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=e40b0257-80f7-4e6f-ab5e-058f6961b2fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.348 155011 INFO neutron.agent.ovn.metadata.agent [-] Port e40b0257-80f7-4e6f-ab5e-058f6961b2fa in datapath efa24ae1-9962-44ca-882a-8d146356fcca unbound from our chassis#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.350 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network efa24ae1-9962-44ca-882a-8d146356fcca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.351 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[aab38958-9e05-4269-b348-d1a9087b5589]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.355 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca namespace which is not needed anymore#033[00m
Feb  2 07:07:23 np0005604943 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Feb  2 07:07:23 np0005604943 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 16.167s CPU time.
Feb  2 07:07:23 np0005604943 systemd-machined[206973]: Machine qemu-21-instance-00000015 terminated.
Feb  2 07:07:23 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[265597]: [NOTICE]   (265602) : haproxy version is 2.8.14-c23fe91
Feb  2 07:07:23 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[265597]: [NOTICE]   (265602) : path to executable is /usr/sbin/haproxy
Feb  2 07:07:23 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[265597]: [WARNING]  (265602) : Exiting Master process...
Feb  2 07:07:23 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[265597]: [ALERT]    (265602) : Current worker (265604) exited with code 143 (Terminated)
Feb  2 07:07:23 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[265597]: [WARNING]  (265602) : All workers exited. Exiting... (0)
Feb  2 07:07:23 np0005604943 systemd[1]: libpod-713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3.scope: Deactivated successfully.
Feb  2 07:07:23 np0005604943 podman[266088]: 2026-02-02 12:07:23.489779877 +0000 UTC m=+0.048372356 container died 713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:07:23 np0005604943 kernel: tape40b0257-80: entered promiscuous mode
Feb  2 07:07:23 np0005604943 NetworkManager[49093]: <info>  [1770034043.4954] manager: (tape40b0257-80): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Feb  2 07:07:23 np0005604943 systemd-udevd[266066]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:07:23 np0005604943 kernel: tape40b0257-80 (unregistering): left promiscuous mode
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.504 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:23Z|00223|binding|INFO|Claiming lport e40b0257-80f7-4e6f-ab5e-058f6961b2fa for this chassis.
Feb  2 07:07:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:23Z|00224|binding|INFO|e40b0257-80f7-4e6f-ab5e-058f6961b2fa: Claiming fa:16:3e:c9:6f:59 10.100.0.11
Feb  2 07:07:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:23Z|00225|if_status|INFO|Dropped 2 log messages in last 359 seconds (most recently, 359 seconds ago) due to excessive rate
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.519 238887 INFO nova.virt.libvirt.driver [-] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Instance destroyed successfully.#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.520 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:23Z|00226|if_status|INFO|Not setting lport e40b0257-80f7-4e6f-ab5e-058f6961b2fa down as sb is readonly
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.521 238887 DEBUG nova.objects.instance [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lazy-loading 'resources' on Instance uuid d0404e7d-4162-4ea0-86e0-e7869e7fb702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:07:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:23Z|00227|binding|INFO|Releasing lport e40b0257-80f7-4e6f-ab5e-058f6961b2fa from this chassis (sb_readonly=0)
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.527 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:6f:59 10.100.0.11'], port_security=['fa:16:3e:c9:6f:59 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd0404e7d-4162-4ea0-86e0-e7869e7fb702', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efa24ae1-9962-44ca-882a-8d146356fcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c7b49c49c104c079544033b07fb2f3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd3424b4-e169-47dd-816d-ac2340e28ccc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b6e8bcf-741b-41c8-a826-9b6dbb1c260b, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=e40b0257-80f7-4e6f-ab5e-058f6961b2fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:07:23 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3-userdata-shm.mount: Deactivated successfully.
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.534 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:6f:59 10.100.0.11'], port_security=['fa:16:3e:c9:6f:59 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd0404e7d-4162-4ea0-86e0-e7869e7fb702', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efa24ae1-9962-44ca-882a-8d146356fcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c7b49c49c104c079544033b07fb2f3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd3424b4-e169-47dd-816d-ac2340e28ccc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b6e8bcf-741b-41c8-a826-9b6dbb1c260b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=e40b0257-80f7-4e6f-ab5e-058f6961b2fa) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:07:23 np0005604943 systemd[1]: var-lib-containers-storage-overlay-988c5e63f9f92b24f3bda8d38f01b1d7dce6747f6e4d0e56d63ed2779e8108d3-merged.mount: Deactivated successfully.
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.540 238887 DEBUG nova.virt.libvirt.vif [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:06:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1726050391',display_name='tempest-TransferEncryptedVolumeTest-server-1726050391',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1726050391',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDy1jBskL6RDlru0VUyMYEuQYZdWj4mgPqYNbp/ZxOi/SP0295JAyJLHX3JiQjzCwuF8BsyBv7iV3J6nvrpEE+i/AXa4yixOsMe088OGvWt8cZiFnV/xX7EKx5mK84nug==',key_name='tempest-TransferEncryptedVolumeTest-704936637',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:06:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c7b49c49c104c079544033b07fb2f3d',ramdisk_id='',reservation_id='r-zed5n8a7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-347797880',owner_user_name='tempest-TransferEncryptedVolumeTest-347797880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:06:57Z,user_data=None,user_id='cd5824e18d5e443cb24d3bf55ff2c553',uuid=d0404e7d-4162-4ea0-86e0-e7869e7fb702,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "address": "fa:16:3e:c9:6f:59", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape40b0257-80", "ovs_interfaceid": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.541 238887 DEBUG nova.network.os_vif_util [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converting VIF {"id": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "address": "fa:16:3e:c9:6f:59", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape40b0257-80", "ovs_interfaceid": "e40b0257-80f7-4e6f-ab5e-058f6961b2fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.541 238887 DEBUG nova.network.os_vif_util [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:6f:59,bridge_name='br-int',has_traffic_filtering=True,id=e40b0257-80f7-4e6f-ab5e-058f6961b2fa,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape40b0257-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.542 238887 DEBUG os_vif [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:6f:59,bridge_name='br-int',has_traffic_filtering=True,id=e40b0257-80f7-4e6f-ab5e-058f6961b2fa,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape40b0257-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.544 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.544 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape40b0257-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.546 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.547 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.549 238887 INFO os_vif [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:6f:59,bridge_name='br-int',has_traffic_filtering=True,id=e40b0257-80f7-4e6f-ab5e-058f6961b2fa,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape40b0257-80')#033[00m
Feb  2 07:07:23 np0005604943 podman[266088]: 2026-02-02 12:07:23.550628238 +0000 UTC m=+0.109220727 container cleanup 713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:07:23 np0005604943 systemd[1]: libpod-conmon-713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3.scope: Deactivated successfully.
Feb  2 07:07:23 np0005604943 podman[266131]: 2026-02-02 12:07:23.625254981 +0000 UTC m=+0.043607867 container remove 713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.630 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6bef56-a950-4672-acf1-c62bcdaf5ced]: (4, ('Mon Feb  2 12:07:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca (713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3)\n713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3\nMon Feb  2 12:07:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca (713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3)\n713470aef1b8e3963d38a0d5f0f1aa49398b46f584e07faa913a48c6502a82e3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.632 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[28cd15ef-937f-4fc2-9e6d-a17247acb470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.634 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefa24ae1-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:23 np0005604943 kernel: tapefa24ae1-90: left promiscuous mode
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.636 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.642 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fa987513-adf9-4629-90a8-f7f45a470827]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.643 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.663 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a997bb9c-8cbd-45d2-a03b-ab11a0658507]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.664 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[fffd4cc2-63a5-4120-a42f-7f302e3d9d96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.678 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4e6b90-0ab6-420e-8717-b680d36548a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445910, 'reachable_time': 40630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266157, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.681 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:07:23 np0005604943 systemd[1]: run-netns-ovnmeta\x2defa24ae1\x2d9962\x2d44ca\x2d882a\x2d8d146356fcca.mount: Deactivated successfully.
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.681 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[ed49d905-80d5-416b-81b4-5e2cd2b7a420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.683 155011 INFO neutron.agent.ovn.metadata.agent [-] Port e40b0257-80f7-4e6f-ab5e-058f6961b2fa in datapath efa24ae1-9962-44ca-882a-8d146356fcca unbound from our chassis#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.685 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network efa24ae1-9962-44ca-882a-8d146356fcca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.686 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d5f82a-e51c-40bb-9912-c6f6ec269f38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.686 155011 INFO neutron.agent.ovn.metadata.agent [-] Port e40b0257-80f7-4e6f-ab5e-058f6961b2fa in datapath efa24ae1-9962-44ca-882a-8d146356fcca unbound from our chassis#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.688 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network efa24ae1-9962-44ca-882a-8d146356fcca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:07:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:23.689 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b28b1774-65f5-439f-b902-b92ccc9582ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.706 238887 INFO nova.virt.libvirt.driver [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Deleting instance files /var/lib/nova/instances/d0404e7d-4162-4ea0-86e0-e7869e7fb702_del#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.707 238887 INFO nova.virt.libvirt.driver [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Deletion of /var/lib/nova/instances/d0404e7d-4162-4ea0-86e0-e7869e7fb702_del complete#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.800 238887 INFO nova.compute.manager [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Took 0.53 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.800 238887 DEBUG oslo.service.loopingcall [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.801 238887 DEBUG nova.compute.manager [-] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:07:23 np0005604943 nova_compute[238883]: 2026-02-02 12:07:23.801 238887 DEBUG nova.network.neutron [-] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:07:24 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb  2 07:07:24 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:24Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:96:08 10.100.0.13
Feb  2 07:07:24 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:24Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:96:08 10.100.0.13
Feb  2 07:07:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 470 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 73 op/s
Feb  2 07:07:24 np0005604943 nova_compute[238883]: 2026-02-02 12:07:24.604 238887 DEBUG nova.compute.manager [req-43c56fe3-8e3f-4dd3-889d-60e1e6e9b144 req-de79abbe-8d27-4efa-91f1-94e4b13ae252 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Received event network-vif-unplugged-e40b0257-80f7-4e6f-ab5e-058f6961b2fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:24 np0005604943 nova_compute[238883]: 2026-02-02 12:07:24.605 238887 DEBUG oslo_concurrency.lockutils [req-43c56fe3-8e3f-4dd3-889d-60e1e6e9b144 req-de79abbe-8d27-4efa-91f1-94e4b13ae252 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:24 np0005604943 nova_compute[238883]: 2026-02-02 12:07:24.606 238887 DEBUG oslo_concurrency.lockutils [req-43c56fe3-8e3f-4dd3-889d-60e1e6e9b144 req-de79abbe-8d27-4efa-91f1-94e4b13ae252 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:24 np0005604943 nova_compute[238883]: 2026-02-02 12:07:24.606 238887 DEBUG oslo_concurrency.lockutils [req-43c56fe3-8e3f-4dd3-889d-60e1e6e9b144 req-de79abbe-8d27-4efa-91f1-94e4b13ae252 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:24 np0005604943 nova_compute[238883]: 2026-02-02 12:07:24.607 238887 DEBUG nova.compute.manager [req-43c56fe3-8e3f-4dd3-889d-60e1e6e9b144 req-de79abbe-8d27-4efa-91f1-94e4b13ae252 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] No waiting events found dispatching network-vif-unplugged-e40b0257-80f7-4e6f-ab5e-058f6961b2fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:07:24 np0005604943 nova_compute[238883]: 2026-02-02 12:07:24.607 238887 DEBUG nova.compute.manager [req-43c56fe3-8e3f-4dd3-889d-60e1e6e9b144 req-de79abbe-8d27-4efa-91f1-94e4b13ae252 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Received event network-vif-unplugged-e40b0257-80f7-4e6f-ab5e-058f6961b2fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:07:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:24.681 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:24 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:24Z|00228|memory|INFO|peak resident set size grew 50% in last 1605.5 seconds, from 16384 kB to 24644 kB
Feb  2 07:07:24 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:24Z|00229|memory|INFO|idl-cells-OVN_Southbound:10958 idl-cells-Open_vSwitch:870 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:401 lflow-cache-entries-cache-matches:293 lflow-cache-size-KB:1637 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:672 ofctrl_installed_flow_usage-KB:491 ofctrl_sb_flow_ref_usage-KB:255
Feb  2 07:07:25 np0005604943 nova_compute[238883]: 2026-02-02 12:07:25.042 238887 DEBUG nova.network.neutron [-] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:25 np0005604943 nova_compute[238883]: 2026-02-02 12:07:25.108 238887 INFO nova.compute.manager [-] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Took 1.31 seconds to deallocate network for instance.#033[00m
Feb  2 07:07:25 np0005604943 nova_compute[238883]: 2026-02-02 12:07:25.350 238887 INFO nova.compute.manager [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Took 0.24 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:07:25 np0005604943 nova_compute[238883]: 2026-02-02 12:07:25.489 238887 DEBUG oslo_concurrency.lockutils [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:25 np0005604943 nova_compute[238883]: 2026-02-02 12:07:25.489 238887 DEBUG oslo_concurrency.lockutils [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:25 np0005604943 nova_compute[238883]: 2026-02-02 12:07:25.605 238887 DEBUG oslo_concurrency.processutils [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:07:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/51518201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.190 238887 DEBUG oslo_concurrency.processutils [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.197 238887 DEBUG nova.compute.provider_tree [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.232 238887 DEBUG nova.scheduler.client.report [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.244 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "e1504ff5-76c4-4676-b71d-745b31db4308" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.245 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.294 238887 DEBUG oslo_concurrency.lockutils [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.298 238887 DEBUG nova.compute.manager [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.347 238887 INFO nova.scheduler.client.report [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Deleted allocations for instance d0404e7d-4162-4ea0-86e0-e7869e7fb702
Feb  2 07:07:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 470 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 73 op/s
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.612 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.612 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.615 238887 DEBUG oslo_concurrency.lockutils [None req-b1a5b8f7-17b0-4fa9-acfe-fc86563e60be cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.622 238887 DEBUG nova.virt.hardware [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.623 238887 INFO nova.compute.claims [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Claim successful on node compute-0.ctlplane.example.com
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.712 238887 DEBUG nova.compute.manager [req-797dd936-3968-4b82-8475-4f3cb0e10122 req-3ab44b7a-0701-49a4-8cdf-fc7474376f1c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Received event network-vif-plugged-e40b0257-80f7-4e6f-ab5e-058f6961b2fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.713 238887 DEBUG oslo_concurrency.lockutils [req-797dd936-3968-4b82-8475-4f3cb0e10122 req-3ab44b7a-0701-49a4-8cdf-fc7474376f1c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.713 238887 DEBUG oslo_concurrency.lockutils [req-797dd936-3968-4b82-8475-4f3cb0e10122 req-3ab44b7a-0701-49a4-8cdf-fc7474376f1c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.713 238887 DEBUG oslo_concurrency.lockutils [req-797dd936-3968-4b82-8475-4f3cb0e10122 req-3ab44b7a-0701-49a4-8cdf-fc7474376f1c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "d0404e7d-4162-4ea0-86e0-e7869e7fb702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.714 238887 DEBUG nova.compute.manager [req-797dd936-3968-4b82-8475-4f3cb0e10122 req-3ab44b7a-0701-49a4-8cdf-fc7474376f1c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] No waiting events found dispatching network-vif-plugged-e40b0257-80f7-4e6f-ab5e-058f6961b2fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.714 238887 WARNING nova.compute.manager [req-797dd936-3968-4b82-8475-4f3cb0e10122 req-3ab44b7a-0701-49a4-8cdf-fc7474376f1c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Received unexpected event network-vif-plugged-e40b0257-80f7-4e6f-ab5e-058f6961b2fa for instance with vm_state deleted and task_state None.
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.714 238887 DEBUG nova.compute.manager [req-797dd936-3968-4b82-8475-4f3cb0e10122 req-3ab44b7a-0701-49a4-8cdf-fc7474376f1c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Received event network-vif-deleted-e40b0257-80f7-4e6f-ab5e-058f6961b2fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.728 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:07:26 np0005604943 nova_compute[238883]: 2026-02-02 12:07:26.790 238887 DEBUG oslo_concurrency.processutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:07:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:07:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4059653080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.321 238887 DEBUG oslo_concurrency.processutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.328 238887 DEBUG nova.compute.provider_tree [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.367 238887 DEBUG nova.scheduler.client.report [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.431 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.432 238887 DEBUG nova.compute.manager [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.515 238887 DEBUG nova.compute.manager [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.516 238887 DEBUG nova.network.neutron [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.535 238887 INFO nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.586 238887 DEBUG nova.compute.manager [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.661 238887 DEBUG nova.policy [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e3fc9d8415541ecaa0da4968c9fa242', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e66ed51ccbb840f083b8a86476696747', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.687 238887 INFO nova.virt.block_device [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Booting with volume f2bff8c5-4641-45a6-9892-9073a5fffa1a at /dev/vda
Feb  2 07:07:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.808 238887 DEBUG os_brick.utils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.810 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.826 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.826 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[e1047a1b-ef47-4e9c-a169-3086d7679167]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.828 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.838 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.838 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[c758355a-dd7b-4651-927f-8b7380d0f681]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.840 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.849 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.850 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[543b230a-104f-4237-96f0-0e72cdb8df06]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.852 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[69c145c6-5d7f-452e-be10-04fab77b7ed8]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.853 238887 DEBUG oslo_concurrency.processutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.883 238887 DEBUG oslo_concurrency.processutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.885 238887 DEBUG os_brick.initiator.connectors.lightos [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.886 238887 DEBUG os_brick.initiator.connectors.lightos [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.886 238887 DEBUG os_brick.initiator.connectors.lightos [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.886 238887 DEBUG os_brick.utils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb  2 07:07:27 np0005604943 nova_compute[238883]: 2026-02-02 12:07:27.887 238887 DEBUG nova.virt.block_device [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Updating existing volume attachment record: a03b8794-747a-4b46-a95b-793d70101980 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb  2 07:07:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 503 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 4.7 MiB/s wr, 123 op/s
Feb  2 07:07:28 np0005604943 nova_compute[238883]: 2026-02-02 12:07:28.491 238887 DEBUG nova.network.neutron [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Successfully created port: aa33b4ec-1599-4737-b61c-25704b712543 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb  2 07:07:28 np0005604943 nova_compute[238883]: 2026-02-02 12:07:28.547 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:07:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:07:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4154138199' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.142 238887 DEBUG nova.compute.manager [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.144 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.145 238887 INFO nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Creating image(s)
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.145 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.146 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Ensure instance console log exists: /var/lib/nova/instances/e1504ff5-76c4-4676-b71d-745b31db4308/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.146 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.147 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.147 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.290 238887 DEBUG nova.network.neutron [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Successfully updated port: aa33b4ec-1599-4737-b61c-25704b712543 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.390 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "refresh_cache-e1504ff5-76c4-4676-b71d-745b31db4308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.390 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquired lock "refresh_cache-e1504ff5-76c4-4676-b71d-745b31db4308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.390 238887 DEBUG nova.network.neutron [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.401 238887 DEBUG nova.compute.manager [req-9c94e189-69c5-491e-81ae-c19bbbe9c69f req-2881c9fe-aad1-41b9-8d7c-769392c7767d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Received event network-changed-aa33b4ec-1599-4737-b61c-25704b712543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.401 238887 DEBUG nova.compute.manager [req-9c94e189-69c5-491e-81ae-c19bbbe9c69f req-2881c9fe-aad1-41b9-8d7c-769392c7767d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Refreshing instance network info cache due to event network-changed-aa33b4ec-1599-4737-b61c-25704b712543. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.401 238887 DEBUG oslo_concurrency.lockutils [req-9c94e189-69c5-491e-81ae-c19bbbe9c69f req-2881c9fe-aad1-41b9-8d7c-769392c7767d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-e1504ff5-76c4-4676-b71d-745b31db4308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 07:07:29 np0005604943 nova_compute[238883]: 2026-02-02 12:07:29.626 238887 DEBUG nova.network.neutron [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb  2 07:07:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 535 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 890 KiB/s rd, 7.0 MiB/s wr, 139 op/s
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.828 238887 DEBUG nova.network.neutron [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Updating instance_info_cache with network_info: [{"id": "aa33b4ec-1599-4737-b61c-25704b712543", "address": "fa:16:3e:b8:03:f7", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa33b4ec-15", "ovs_interfaceid": "aa33b4ec-1599-4737-b61c-25704b712543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.912 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Releasing lock "refresh_cache-e1504ff5-76c4-4676-b71d-745b31db4308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.912 238887 DEBUG nova.compute.manager [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Instance network_info: |[{"id": "aa33b4ec-1599-4737-b61c-25704b712543", "address": "fa:16:3e:b8:03:f7", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa33b4ec-15", "ovs_interfaceid": "aa33b4ec-1599-4737-b61c-25704b712543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.913 238887 DEBUG oslo_concurrency.lockutils [req-9c94e189-69c5-491e-81ae-c19bbbe9c69f req-2881c9fe-aad1-41b9-8d7c-769392c7767d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-e1504ff5-76c4-4676-b71d-745b31db4308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.913 238887 DEBUG nova.network.neutron [req-9c94e189-69c5-491e-81ae-c19bbbe9c69f req-2881c9fe-aad1-41b9-8d7c-769392c7767d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Refreshing network info cache for port aa33b4ec-1599-4737-b61c-25704b712543 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.916 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Start _get_guest_xml network_info=[{"id": "aa33b4ec-1599-4737-b61c-25704b712543", "address": "fa:16:3e:b8:03:f7", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa33b4ec-15", "ovs_interfaceid": "aa33b4ec-1599-4737-b61c-25704b712543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': 'a03b8794-747a-4b46-a95b-793d70101980', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f2bff8c5-4641-45a6-9892-9073a5fffa1a', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f2bff8c5-4641-45a6-9892-9073a5fffa1a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'e1504ff5-76c4-4676-b71d-745b31db4308', 'attached_at': '', 'detached_at': '', 'volume_id': 'f2bff8c5-4641-45a6-9892-9073a5fffa1a', 'serial': 'f2bff8c5-4641-45a6-9892-9073a5fffa1a'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.922 238887 WARNING nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.927 238887 DEBUG nova.virt.libvirt.host [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.928 238887 DEBUG nova.virt.libvirt.host [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.932 238887 DEBUG nova.virt.libvirt.host [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.933 238887 DEBUG nova.virt.libvirt.host [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.933 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.934 238887 DEBUG nova.virt.hardware [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.934 238887 DEBUG nova.virt.hardware [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.934 238887 DEBUG nova.virt.hardware [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.934 238887 DEBUG nova.virt.hardware [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.935 238887 DEBUG nova.virt.hardware [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.935 238887 DEBUG nova.virt.hardware [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.935 238887 DEBUG nova.virt.hardware [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.935 238887 DEBUG nova.virt.hardware [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.935 238887 DEBUG nova.virt.hardware [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.936 238887 DEBUG nova.virt.hardware [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.936 238887 DEBUG nova.virt.hardware [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.964 238887 DEBUG nova.storage.rbd_utils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image e1504ff5-76c4-4676-b71d-745b31db4308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:07:30 np0005604943 nova_compute[238883]: 2026-02-02 12:07:30.968 238887 DEBUG oslo_concurrency.processutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:07:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4136826866' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.539 238887 DEBUG oslo_concurrency.processutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.584 238887 DEBUG nova.virt.libvirt.vif [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:07:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-642961730',display_name='tempest-TestVolumeBootPattern-server-642961730',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-642961730',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO8pS3TOdyjX/N+jIFJqRkOzhDpnnQvMuyVIbWIYhdDa58/4gu4+MtK78TaoPi0KBaxHL0lWzg2GYnnuAmOLK3vOMGsshwGNfMmLTGNRIjuKqnaNrr1v/EHYLJ6m8LkFkQ==',key_name='tempest-TestVolumeBootPattern-656783760',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-aabq3tp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:07:27Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=e1504ff5-76c4-4676-b71d-745b31db4308,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa33b4ec-1599-4737-b61c-25704b712543", "address": "fa:16:3e:b8:03:f7", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa33b4ec-15", "ovs_interfaceid": "aa33b4ec-1599-4737-b61c-25704b712543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.584 238887 DEBUG nova.network.os_vif_util [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "aa33b4ec-1599-4737-b61c-25704b712543", "address": "fa:16:3e:b8:03:f7", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa33b4ec-15", "ovs_interfaceid": "aa33b4ec-1599-4737-b61c-25704b712543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.585 238887 DEBUG nova.network.os_vif_util [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:03:f7,bridge_name='br-int',has_traffic_filtering=True,id=aa33b4ec-1599-4737-b61c-25704b712543,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa33b4ec-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.586 238887 DEBUG nova.objects.instance [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1504ff5-76c4-4676-b71d-745b31db4308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.600 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  <uuid>e1504ff5-76c4-4676-b71d-745b31db4308</uuid>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  <name>instance-00000018</name>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestVolumeBootPattern-server-642961730</nova:name>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:07:30</nova:creationTime>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:        <nova:user uuid="5e3fc9d8415541ecaa0da4968c9fa242">tempest-TestVolumeBootPattern-1059348902-project-member</nova:user>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:        <nova:project uuid="e66ed51ccbb840f083b8a86476696747">tempest-TestVolumeBootPattern-1059348902</nova:project>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:        <nova:port uuid="aa33b4ec-1599-4737-b61c-25704b712543">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <entry name="serial">e1504ff5-76c4-4676-b71d-745b31db4308</entry>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <entry name="uuid">e1504ff5-76c4-4676-b71d-745b31db4308</entry>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/e1504ff5-76c4-4676-b71d-745b31db4308_disk.config">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-f2bff8c5-4641-45a6-9892-9073a5fffa1a">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <serial>f2bff8c5-4641-45a6-9892-9073a5fffa1a</serial>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:b8:03:f7"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <target dev="tapaa33b4ec-15"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/e1504ff5-76c4-4676-b71d-745b31db4308/console.log" append="off"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:07:31 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:07:31 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:07:31 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:07:31 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.601 238887 DEBUG nova.compute.manager [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Preparing to wait for external event network-vif-plugged-aa33b4ec-1599-4737-b61c-25704b712543 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.601 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.602 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.602 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.603 238887 DEBUG nova.virt.libvirt.vif [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:07:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-642961730',display_name='tempest-TestVolumeBootPattern-server-642961730',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-642961730',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO8pS3TOdyjX/N+jIFJqRkOzhDpnnQvMuyVIbWIYhdDa58/4gu4+MtK78TaoPi0KBaxHL0lWzg2GYnnuAmOLK3vOMGsshwGNfMmLTGNRIjuKqnaNrr1v/EHYLJ6m8LkFkQ==',key_name='tempest-TestVolumeBootPattern-656783760',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-aabq3tp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:07:27Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=e1504ff5-76c4-4676-b71d-745b31db4308,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa33b4ec-1599-4737-b61c-25704b712543", "address": "fa:16:3e:b8:03:f7", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa33b4ec-15", "ovs_interfaceid": "aa33b4ec-1599-4737-b61c-25704b712543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.603 238887 DEBUG nova.network.os_vif_util [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "aa33b4ec-1599-4737-b61c-25704b712543", "address": "fa:16:3e:b8:03:f7", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa33b4ec-15", "ovs_interfaceid": "aa33b4ec-1599-4737-b61c-25704b712543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.604 238887 DEBUG nova.network.os_vif_util [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:03:f7,bridge_name='br-int',has_traffic_filtering=True,id=aa33b4ec-1599-4737-b61c-25704b712543,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa33b4ec-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.605 238887 DEBUG os_vif [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:03:f7,bridge_name='br-int',has_traffic_filtering=True,id=aa33b4ec-1599-4737-b61c-25704b712543,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa33b4ec-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.606 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.606 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.607 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.611 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.611 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa33b4ec-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.612 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa33b4ec-15, col_values=(('external_ids', {'iface-id': 'aa33b4ec-1599-4737-b61c-25704b712543', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:03:f7', 'vm-uuid': 'e1504ff5-76c4-4676-b71d-745b31db4308'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.613 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:31 np0005604943 NetworkManager[49093]: <info>  [1770034051.6147] manager: (tapaa33b4ec-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.617 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.619 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.621 238887 INFO os_vif [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:03:f7,bridge_name='br-int',has_traffic_filtering=True,id=aa33b4ec-1599-4737-b61c-25704b712543,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa33b4ec-15')#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.729 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.788 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.789 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.789 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] No VIF found with MAC fa:16:3e:b8:03:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.789 238887 INFO nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Using config drive#033[00m
Feb  2 07:07:31 np0005604943 nova_compute[238883]: 2026-02-02 12:07:31.815 238887 DEBUG nova.storage.rbd_utils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image e1504ff5-76c4-4676-b71d-745b31db4308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.099 238887 DEBUG oslo_concurrency.lockutils [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "a8bef119-c694-432a-984b-0f0f2b570103" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.100 238887 DEBUG oslo_concurrency.lockutils [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.101 238887 DEBUG oslo_concurrency.lockutils [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "a8bef119-c694-432a-984b-0f0f2b570103-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.101 238887 DEBUG oslo_concurrency.lockutils [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.101 238887 DEBUG oslo_concurrency.lockutils [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.103 238887 INFO nova.compute.manager [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Terminating instance#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.104 238887 DEBUG nova.compute.manager [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:07:32 np0005604943 kernel: tap84c80f93-58 (unregistering): left promiscuous mode
Feb  2 07:07:32 np0005604943 NetworkManager[49093]: <info>  [1770034052.1667] device (tap84c80f93-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:07:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:32Z|00230|binding|INFO|Releasing lport 84c80f93-58de-43af-9685-e46ce8e0854f from this chassis (sb_readonly=0)
Feb  2 07:07:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:32Z|00231|binding|INFO|Setting lport 84c80f93-58de-43af-9685-e46ce8e0854f down in Southbound
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.224 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:32Z|00232|binding|INFO|Removing iface tap84c80f93-58 ovn-installed in OVS
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.227 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.232 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.233 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:96:08 10.100.0.13'], port_security=['fa:16:3e:47:96:08 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a8bef119-c694-432a-984b-0f0f2b570103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb13b2a6-b763-41ef-a5c4-123372e94249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851fb6d80faf43cc9b2fef1913323704', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e0a2abe2-60a1-49ea-89b8-ea7fffedac5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10f2dc12-4c00-4783-968f-4cacec86630e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=84c80f93-58de-43af-9685-e46ce8e0854f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.235 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 84c80f93-58de-43af-9685-e46ce8e0854f in datapath fb13b2a6-b763-41ef-a5c4-123372e94249 unbound from our chassis#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.237 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb13b2a6-b763-41ef-a5c4-123372e94249, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.238 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[53558a2d-02e5-4b2a-a13a-8dc8167ec0d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.239 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 namespace which is not needed anymore#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.259 238887 INFO nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Creating config drive at /var/lib/nova/instances/e1504ff5-76c4-4676-b71d-745b31db4308/disk.config#033[00m
Feb  2 07:07:32 np0005604943 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Deactivated successfully.
Feb  2 07:07:32 np0005604943 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Consumed 15.480s CPU time.
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.264 238887 DEBUG oslo_concurrency.processutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1504ff5-76c4-4676-b71d-745b31db4308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpq1gttdyo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:32 np0005604943 systemd-machined[206973]: Machine qemu-23-instance-00000017 terminated.
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.343 238887 INFO nova.virt.libvirt.driver [-] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Instance destroyed successfully.#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.344 238887 DEBUG nova.objects.instance [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lazy-loading 'resources' on Instance uuid a8bef119-c694-432a-984b-0f0f2b570103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.360 238887 DEBUG nova.virt.libvirt.vif [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:06:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-483015749',display_name='tempest-TestEncryptedCinderVolumes-server-483015749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-483015749',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4sEG9hObpGnevoIlqMdkrX6LtyepBRCjADAYBnTUNxH7zE9sXens2JsebTT1q5zN1V4atJxK/wradQkp5n2K1zuz899xdCKCopiRNmhKseY0+RU/9UYAZOT5nySAcl7g==',key_name='tempest-TestEncryptedCinderVolumes-1244227927',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:07:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='851fb6d80faf43cc9b2fef1913323704',ramdisk_id='',reservation_id='r-m3xj6x1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-1976450145',owner_user_name='tempest-TestEncryptedCinderVolumes-1976450145-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:07:11Z,user_data=None,user_id='084f489a7b4c4fecba7b0942ed1b7203',uuid=a8bef119-c694-432a-984b-0f0f2b570103,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84c80f93-58de-43af-9685-e46ce8e0854f", "address": "fa:16:3e:47:96:08", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c80f93-58", "ovs_interfaceid": "84c80f93-58de-43af-9685-e46ce8e0854f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.361 238887 DEBUG nova.network.os_vif_util [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converting VIF {"id": "84c80f93-58de-43af-9685-e46ce8e0854f", "address": "fa:16:3e:47:96:08", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84c80f93-58", "ovs_interfaceid": "84c80f93-58de-43af-9685-e46ce8e0854f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.362 238887 DEBUG nova.network.os_vif_util [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:96:08,bridge_name='br-int',has_traffic_filtering=True,id=84c80f93-58de-43af-9685-e46ce8e0854f,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c80f93-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.362 238887 DEBUG os_vif [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:96:08,bridge_name='br-int',has_traffic_filtering=True,id=84c80f93-58de-43af-9685-e46ce8e0854f,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c80f93-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.365 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.365 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c80f93-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.367 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:32 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[265998]: [NOTICE]   (266002) : haproxy version is 2.8.14-c23fe91
Feb  2 07:07:32 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[265998]: [NOTICE]   (266002) : path to executable is /usr/sbin/haproxy
Feb  2 07:07:32 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[265998]: [WARNING]  (266002) : Exiting Master process...
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.369 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.371 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:32 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[265998]: [ALERT]    (266002) : Current worker (266004) exited with code 143 (Terminated)
Feb  2 07:07:32 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[265998]: [WARNING]  (266002) : All workers exited. Exiting... (0)
Feb  2 07:07:32 np0005604943 systemd[1]: libpod-021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795.scope: Deactivated successfully.
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.375 238887 INFO os_vif [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:96:08,bridge_name='br-int',has_traffic_filtering=True,id=84c80f93-58de-43af-9685-e46ce8e0854f,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84c80f93-58')#033[00m
Feb  2 07:07:32 np0005604943 podman[266296]: 2026-02-02 12:07:32.380011538 +0000 UTC m=+0.052156738 container died 021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.400 238887 DEBUG oslo_concurrency.processutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1504ff5-76c4-4676-b71d-745b31db4308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpq1gttdyo" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:32 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795-userdata-shm.mount: Deactivated successfully.
Feb  2 07:07:32 np0005604943 systemd[1]: var-lib-containers-storage-overlay-0e1d5508ce2d5d4244f8358e032685a43a127d29cd80144a07f595118cfeaaba-merged.mount: Deactivated successfully.
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.428 238887 DEBUG nova.storage.rbd_utils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] rbd image e1504ff5-76c4-4676-b71d-745b31db4308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.432 238887 DEBUG oslo_concurrency.processutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1504ff5-76c4-4676-b71d-745b31db4308/disk.config e1504ff5-76c4-4676-b71d-745b31db4308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:32 np0005604943 podman[266296]: 2026-02-02 12:07:32.44868254 +0000 UTC m=+0.120827730 container cleanup 021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 07:07:32 np0005604943 systemd[1]: libpod-conmon-021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795.scope: Deactivated successfully.
Feb  2 07:07:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 535 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 765 KiB/s rd, 6.0 MiB/s wr, 121 op/s
Feb  2 07:07:32 np0005604943 podman[266373]: 2026-02-02 12:07:32.526701056 +0000 UTC m=+0.059090595 container remove 021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.526 238887 DEBUG nova.compute.manager [req-998a2166-513a-4377-92e6-72561a858e59 req-2b42200b-af5f-49d4-a86a-c0d061b37af4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Received event network-vif-unplugged-84c80f93-58de-43af-9685-e46ce8e0854f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.528 238887 DEBUG oslo_concurrency.lockutils [req-998a2166-513a-4377-92e6-72561a858e59 req-2b42200b-af5f-49d4-a86a-c0d061b37af4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "a8bef119-c694-432a-984b-0f0f2b570103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.528 238887 DEBUG oslo_concurrency.lockutils [req-998a2166-513a-4377-92e6-72561a858e59 req-2b42200b-af5f-49d4-a86a-c0d061b37af4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.528 238887 DEBUG oslo_concurrency.lockutils [req-998a2166-513a-4377-92e6-72561a858e59 req-2b42200b-af5f-49d4-a86a-c0d061b37af4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.529 238887 DEBUG nova.compute.manager [req-998a2166-513a-4377-92e6-72561a858e59 req-2b42200b-af5f-49d4-a86a-c0d061b37af4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] No waiting events found dispatching network-vif-unplugged-84c80f93-58de-43af-9685-e46ce8e0854f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.529 238887 DEBUG nova.compute.manager [req-998a2166-513a-4377-92e6-72561a858e59 req-2b42200b-af5f-49d4-a86a-c0d061b37af4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Received event network-vif-unplugged-84c80f93-58de-43af-9685-e46ce8e0854f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.533 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[0dadef2a-8881-4786-99f1-1cfeb30ba725]: (4, ('Mon Feb  2 12:07:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 (021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795)\n021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795\nMon Feb  2 12:07:32 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 (021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795)\n021e37e1ee3c6e35aa15d7db2ddffdac04d07b2e762feec6de5b6ac4bff19795\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.536 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[08029685-17fc-4b47-98d7-a2f910adfd08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.537 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb13b2a6-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:32 np0005604943 kernel: tapfb13b2a6-b0: left promiscuous mode
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.541 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.551 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.555 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c557ba7c-34a7-4b9d-a4ed-26825d68b263]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.566 238887 DEBUG nova.network.neutron [req-9c94e189-69c5-491e-81ae-c19bbbe9c69f req-2881c9fe-aad1-41b9-8d7c-769392c7767d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Updated VIF entry in instance network info cache for port aa33b4ec-1599-4737-b61c-25704b712543. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.566 238887 DEBUG nova.network.neutron [req-9c94e189-69c5-491e-81ae-c19bbbe9c69f req-2881c9fe-aad1-41b9-8d7c-769392c7767d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Updating instance_info_cache with network_info: [{"id": "aa33b4ec-1599-4737-b61c-25704b712543", "address": "fa:16:3e:b8:03:f7", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa33b4ec-15", "ovs_interfaceid": "aa33b4ec-1599-4737-b61c-25704b712543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.568 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2b96fd02-256f-42e8-b3a8-942246ccb5f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.571 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[5672a86f-1fdd-4873-b32c-d86b596e98e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.580 238887 DEBUG oslo_concurrency.lockutils [req-9c94e189-69c5-491e-81ae-c19bbbe9c69f req-2881c9fe-aad1-41b9-8d7c-769392c7767d 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-e1504ff5-76c4-4676-b71d-745b31db4308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.587 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[539e6861-5f4c-486c-8af5-6568a6576965]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447388, 'reachable_time': 30983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266411, 'error': None, 'target': 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 systemd[1]: run-netns-ovnmeta\x2dfb13b2a6\x2db763\x2d41ef\x2da5c4\x2d123372e94249.mount: Deactivated successfully.
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.592 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.593 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[5048b208-cddf-44fc-b046-d3cd1fd0cf5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.617 238887 DEBUG oslo_concurrency.processutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1504ff5-76c4-4676-b71d-745b31db4308/disk.config e1504ff5-76c4-4676-b71d-745b31db4308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.618 238887 INFO nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Deleting local config drive /var/lib/nova/instances/e1504ff5-76c4-4676-b71d-745b31db4308/disk.config because it was imported into RBD.#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.655 238887 INFO nova.virt.libvirt.driver [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Deleting instance files /var/lib/nova/instances/a8bef119-c694-432a-984b-0f0f2b570103_del#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.656 238887 INFO nova.virt.libvirt.driver [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Deletion of /var/lib/nova/instances/a8bef119-c694-432a-984b-0f0f2b570103_del complete#033[00m
Feb  2 07:07:32 np0005604943 kernel: tapaa33b4ec-15: entered promiscuous mode
Feb  2 07:07:32 np0005604943 NetworkManager[49093]: <info>  [1770034052.6721] manager: (tapaa33b4ec-15): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Feb  2 07:07:32 np0005604943 systemd-udevd[266273]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:07:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:32Z|00233|binding|INFO|Claiming lport aa33b4ec-1599-4737-b61c-25704b712543 for this chassis.
Feb  2 07:07:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:32Z|00234|binding|INFO|aa33b4ec-1599-4737-b61c-25704b712543: Claiming fa:16:3e:b8:03:f7 10.100.0.11
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.674 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.681 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:03:f7 10.100.0.11'], port_security=['fa:16:3e:b8:03:f7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e1504ff5-76c4-4676-b71d-745b31db4308', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f445a686-10d3-4653-b101-b0c161d236b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=aa33b4ec-1599-4737-b61c-25704b712543) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.682 155011 INFO neutron.agent.ovn.metadata.agent [-] Port aa33b4ec-1599-4737-b61c-25704b712543 in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 bound to our chassis#033[00m
Feb  2 07:07:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:32Z|00235|binding|INFO|Setting lport aa33b4ec-1599-4737-b61c-25704b712543 ovn-installed in OVS
Feb  2 07:07:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:32Z|00236|binding|INFO|Setting lport aa33b4ec-1599-4737-b61c-25704b712543 up in Southbound
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.684 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34290362-cccd-452d-8e7e-22a6057fdb60#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.684 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:32 np0005604943 NetworkManager[49093]: <info>  [1770034052.6865] device (tapaa33b4ec-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:07:32 np0005604943 NetworkManager[49093]: <info>  [1770034052.6871] device (tapaa33b4ec-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.689 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.700 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[ea349aef-740b-4cee-85b7-51c99683be74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 systemd-machined[206973]: New machine qemu-24-instance-00000018.
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.710 238887 INFO nova.compute.manager [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.711 238887 DEBUG oslo.service.loopingcall [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.711 238887 DEBUG nova.compute.manager [-] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.712 238887 DEBUG nova.network.neutron [-] [instance: a8bef119-c694-432a-984b-0f0f2b570103] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:07:32 np0005604943 systemd[1]: Started Virtual Machine qemu-24-instance-00000018.
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.733 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[a96cddf6-a250-48f4-9576-098e7b6c0177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.738 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1ea8a4-997b-4112-836f-cb75bff6f433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.769 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[097ba53c-aed7-4a5b-8092-d338535f0ffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.787 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[769f8a11-f2f4-4e5c-898f-222fe196fe28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446009, 'reachable_time': 23880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266437, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.803 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc96a81-e679-4d64-9e6a-21234f7a769b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34290362-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446022, 'tstamp': 446022}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266439, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34290362-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446024, 'tstamp': 446024}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266439, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.805 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:32 np0005604943 nova_compute[238883]: 2026-02-02 12:07:32.807 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.809 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34290362-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.809 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.810 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34290362-c0, col_values=(('external_ids', {'iface-id': '54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:32.810 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:07:33 np0005604943 nova_compute[238883]: 2026-02-02 12:07:33.240 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034053.2392292, e1504ff5-76c4-4676-b71d-745b31db4308 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:07:33 np0005604943 nova_compute[238883]: 2026-02-02 12:07:33.241 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] VM Started (Lifecycle Event)#033[00m
Feb  2 07:07:33 np0005604943 nova_compute[238883]: 2026-02-02 12:07:33.265 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:33 np0005604943 nova_compute[238883]: 2026-02-02 12:07:33.271 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034053.2393923, e1504ff5-76c4-4676-b71d-745b31db4308 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:07:33 np0005604943 nova_compute[238883]: 2026-02-02 12:07:33.271 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:07:33 np0005604943 nova_compute[238883]: 2026-02-02 12:07:33.291 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:33 np0005604943 nova_compute[238883]: 2026-02-02 12:07:33.295 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:07:33 np0005604943 nova_compute[238883]: 2026-02-02 12:07:33.316 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:07:33 np0005604943 nova_compute[238883]: 2026-02-02 12:07:33.534 238887 DEBUG nova.network.neutron [-] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:33 np0005604943 nova_compute[238883]: 2026-02-02 12:07:33.551 238887 INFO nova.compute.manager [-] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Took 0.84 seconds to deallocate network for instance.#033[00m
Feb  2 07:07:33 np0005604943 nova_compute[238883]: 2026-02-02 12:07:33.706 238887 INFO nova.compute.manager [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Took 0.15 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:07:33 np0005604943 nova_compute[238883]: 2026-02-02 12:07:33.787 238887 DEBUG oslo_concurrency.lockutils [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:33 np0005604943 nova_compute[238883]: 2026-02-02 12:07:33.787 238887 DEBUG oslo_concurrency.lockutils [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.077 238887 DEBUG oslo_concurrency.processutils [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 535 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 606 KiB/s rd, 5.8 MiB/s wr, 125 op/s
Feb  2 07:07:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:07:34 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3932855171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.613 238887 DEBUG nova.compute.manager [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Received event network-vif-plugged-84c80f93-58de-43af-9685-e46ce8e0854f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.614 238887 DEBUG oslo_concurrency.lockutils [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "a8bef119-c694-432a-984b-0f0f2b570103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.614 238887 DEBUG oslo_concurrency.lockutils [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.614 238887 DEBUG oslo_concurrency.lockutils [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.614 238887 DEBUG nova.compute.manager [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] No waiting events found dispatching network-vif-plugged-84c80f93-58de-43af-9685-e46ce8e0854f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.615 238887 WARNING nova.compute.manager [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Received unexpected event network-vif-plugged-84c80f93-58de-43af-9685-e46ce8e0854f for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.615 238887 DEBUG nova.compute.manager [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Received event network-vif-plugged-aa33b4ec-1599-4737-b61c-25704b712543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.615 238887 DEBUG oslo_concurrency.lockutils [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.615 238887 DEBUG oslo_concurrency.lockutils [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.616 238887 DEBUG oslo_concurrency.lockutils [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.616 238887 DEBUG nova.compute.manager [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Processing event network-vif-plugged-aa33b4ec-1599-4737-b61c-25704b712543 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.616 238887 DEBUG nova.compute.manager [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Received event network-vif-plugged-aa33b4ec-1599-4737-b61c-25704b712543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.616 238887 DEBUG oslo_concurrency.lockutils [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.616 238887 DEBUG oslo_concurrency.lockutils [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.617 238887 DEBUG oslo_concurrency.lockutils [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.617 238887 DEBUG nova.compute.manager [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] No waiting events found dispatching network-vif-plugged-aa33b4ec-1599-4737-b61c-25704b712543 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.617 238887 WARNING nova.compute.manager [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Received unexpected event network-vif-plugged-aa33b4ec-1599-4737-b61c-25704b712543 for instance with vm_state building and task_state spawning.#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.617 238887 DEBUG nova.compute.manager [req-8373b84a-270c-4f77-8df8-f88f8e9141ca req-2b0cd1e1-124d-4b6f-96cc-751f4be514b6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Received event network-vif-deleted-84c80f93-58de-43af-9685-e46ce8e0854f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.619 238887 DEBUG nova.compute.manager [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.623 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034054.6224017, e1504ff5-76c4-4676-b71d-745b31db4308 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.624 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.626 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.632 238887 INFO nova.virt.libvirt.driver [-] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Instance spawned successfully.#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.632 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.637 238887 DEBUG oslo_concurrency.processutils [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.643 238887 DEBUG nova.compute.provider_tree [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.671 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.682 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.682 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.683 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.683 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.684 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.684 238887 DEBUG nova.virt.libvirt.driver [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.688 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.721 238887 DEBUG nova.scheduler.client.report [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.727 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.765 238887 INFO nova.compute.manager [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Took 5.62 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.766 238887 DEBUG nova.compute.manager [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.768 238887 DEBUG oslo_concurrency.lockutils [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.807 238887 INFO nova.scheduler.client.report [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Deleted allocations for instance a8bef119-c694-432a-984b-0f0f2b570103#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.866 238887 INFO nova.compute.manager [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Took 8.49 seconds to build instance.#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.877 238887 DEBUG oslo_concurrency.lockutils [None req-c4f2ddff-d3cc-4544-9891-2c591d023bb9 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "a8bef119-c694-432a-984b-0f0f2b570103" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:34 np0005604943 nova_compute[238883]: 2026-02-02 12:07:34.881 238887 DEBUG oslo_concurrency.lockutils [None req-1bd1c88d-c192-4f15-a287-d089ceceee1a 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Feb  2 07:07:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Feb  2 07:07:35 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Feb  2 07:07:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 535 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 587 KiB/s rd, 5.8 MiB/s wr, 119 op/s
Feb  2 07:07:36 np0005604943 nova_compute[238883]: 2026-02-02 12:07:36.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:07:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:07:36 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2559001862' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:07:36 np0005604943 nova_compute[238883]: 2026-02-02 12:07:36.732 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.182 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.182 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.215 238887 DEBUG nova.compute.manager [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.288 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.289 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.296 238887 DEBUG nova.virt.hardware [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.298 238887 INFO nova.compute.claims [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Claim successful on node compute-0.ctlplane.example.com
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.367 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.423 238887 DEBUG oslo_concurrency.processutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.644 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.644 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.665 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb  2 07:07:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.907 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.908 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquired lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.908 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb  2 07:07:37 np0005604943 nova_compute[238883]: 2026-02-02 12:07:37.909 238887 DEBUG nova.objects.instance [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 140c7b65-c11d-4032-aaf8-db6b3df5127e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 07:07:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:07:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/928166113' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.005 238887 DEBUG oslo_concurrency.processutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.010 238887 DEBUG nova.compute.provider_tree [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.026 238887 DEBUG nova.scheduler.client.report [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.046 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.047 238887 DEBUG nova.compute.manager [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.092 238887 DEBUG nova.compute.manager [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.093 238887 DEBUG nova.network.neutron [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.118 238887 INFO nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.143 238887 DEBUG nova.compute.manager [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.195 238887 INFO nova.virt.block_device [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Booting with volume 5b4325ec-9602-4b79-9255-eb8f8017eaca at /dev/vda
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.291 238887 DEBUG nova.policy [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cd5824e18d5e443cb24d3bf55ff2c553', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c7b49c49c104c079544033b07fb2f3d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.337 238887 DEBUG nova.compute.manager [req-0d89532f-28f3-4b9f-a6f1-566906a246f1 req-8ba68dfa-425e-4bb0-add6-8b16a0fdd54c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Received event network-changed-aa33b4ec-1599-4737-b61c-25704b712543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.338 238887 DEBUG nova.compute.manager [req-0d89532f-28f3-4b9f-a6f1-566906a246f1 req-8ba68dfa-425e-4bb0-add6-8b16a0fdd54c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Refreshing instance network info cache due to event network-changed-aa33b4ec-1599-4737-b61c-25704b712543. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.338 238887 DEBUG oslo_concurrency.lockutils [req-0d89532f-28f3-4b9f-a6f1-566906a246f1 req-8ba68dfa-425e-4bb0-add6-8b16a0fdd54c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-e1504ff5-76c4-4676-b71d-745b31db4308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.338 238887 DEBUG oslo_concurrency.lockutils [req-0d89532f-28f3-4b9f-a6f1-566906a246f1 req-8ba68dfa-425e-4bb0-add6-8b16a0fdd54c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-e1504ff5-76c4-4676-b71d-745b31db4308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.338 238887 DEBUG nova.network.neutron [req-0d89532f-28f3-4b9f-a6f1-566906a246f1 req-8ba68dfa-425e-4bb0-add6-8b16a0fdd54c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Refreshing network info cache for port aa33b4ec-1599-4737-b61c-25704b712543 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.370 238887 DEBUG os_brick.utils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.371 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.382 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.382 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[718ca54d-5c34-4d36-8326-24e3ad099778]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.384 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.393 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.394 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2e124d-b7a7-4bd8-912b-132826865e0e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.396 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.406 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.407 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0cd739-bd47-4371-89d9-99aac70aca9c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.408 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[e49e7de9-d046-4d51-84d9-e1fbd1034032]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.409 238887 DEBUG oslo_concurrency.processutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.436 238887 DEBUG oslo_concurrency.processutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.438 238887 DEBUG os_brick.initiator.connectors.lightos [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.438 238887 DEBUG os_brick.initiator.connectors.lightos [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.439 238887 DEBUG os_brick.initiator.connectors.lightos [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.439 238887 DEBUG os_brick.utils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.440 238887 DEBUG nova.virt.block_device [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Updating existing volume attachment record: 28a1a93f-06d1-41cf-ac40-0ec24f075e4d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Feb  2 07:07:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 535 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.3 MiB/s wr, 111 op/s
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.524 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770034043.5234044, d0404e7d-4162-4ea0-86e0-e7869e7fb702 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.525 238887 INFO nova.compute.manager [-] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] VM Stopped (Lifecycle Event)
Feb  2 07:07:38 np0005604943 nova_compute[238883]: 2026-02-02 12:07:38.547 238887 DEBUG nova.compute.manager [None req-9a4b101b-0d13-4404-ae5c-c59830d0275b - - - - - -] [instance: d0404e7d-4162-4ea0-86e0-e7869e7fb702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:07:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:07:39 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2445613838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.477 238887 DEBUG nova.compute.manager [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.481 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.481 238887 INFO nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Creating image(s)
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.482 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.482 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Ensure instance console log exists: /var/lib/nova/instances/63fa96af-eee7-4ee3-b95a-c4036a37b3bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.482 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.483 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.483 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.573 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Updating instance_info_cache with network_info: [{"id": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "address": "fa:16:3e:18:be:eb", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afadb99-91", "ovs_interfaceid": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.591 238887 DEBUG nova.network.neutron [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Successfully created port: 5698cd0c-cd85-4888-ad3a-2c588d4e45cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.597 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Releasing lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.598 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.599 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.599 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.626 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.627 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.628 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.628 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.628 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.961 238887 DEBUG nova.network.neutron [req-0d89532f-28f3-4b9f-a6f1-566906a246f1 req-8ba68dfa-425e-4bb0-add6-8b16a0fdd54c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Updated VIF entry in instance network info cache for port aa33b4ec-1599-4737-b61c-25704b712543. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.962 238887 DEBUG nova.network.neutron [req-0d89532f-28f3-4b9f-a6f1-566906a246f1 req-8ba68dfa-425e-4bb0-add6-8b16a0fdd54c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Updating instance_info_cache with network_info: [{"id": "aa33b4ec-1599-4737-b61c-25704b712543", "address": "fa:16:3e:b8:03:f7", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa33b4ec-15", "ovs_interfaceid": "aa33b4ec-1599-4737-b61c-25704b712543", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 07:07:39 np0005604943 nova_compute[238883]: 2026-02-02 12:07:39.986 238887 DEBUG oslo_concurrency.lockutils [req-0d89532f-28f3-4b9f-a6f1-566906a246f1 req-8ba68dfa-425e-4bb0-add6-8b16a0fdd54c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-e1504ff5-76c4-4676-b71d-745b31db4308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 07:07:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:07:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507700217' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.273 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.356 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.357 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.361 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.361 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.401 238887 DEBUG nova.network.neutron [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Successfully updated port: 5698cd0c-cd85-4888-ad3a-2c588d4e45cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.417 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "refresh_cache-63fa96af-eee7-4ee3-b95a-c4036a37b3bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.418 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquired lock "refresh_cache-63fa96af-eee7-4ee3-b95a-c4036a37b3bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.418 238887 DEBUG nova.network.neutron [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:07:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 535 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 41 KiB/s wr, 141 op/s
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.477 238887 DEBUG nova.compute.manager [req-3285f19a-b379-414a-b041-19eb0fbfb0e3 req-bd09060a-ad54-43b6-a364-998bfe59ef37 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Received event network-changed-5698cd0c-cd85-4888-ad3a-2c588d4e45cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.479 238887 DEBUG nova.compute.manager [req-3285f19a-b379-414a-b041-19eb0fbfb0e3 req-bd09060a-ad54-43b6-a364-998bfe59ef37 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Refreshing instance network info cache due to event network-changed-5698cd0c-cd85-4888-ad3a-2c588d4e45cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.479 238887 DEBUG oslo_concurrency.lockutils [req-3285f19a-b379-414a-b041-19eb0fbfb0e3 req-bd09060a-ad54-43b6-a364-998bfe59ef37 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-63fa96af-eee7-4ee3-b95a-c4036a37b3bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.564 238887 DEBUG nova.network.neutron [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.570 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.572 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3986MB free_disk=59.98785804864019GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.573 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.573 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.678 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 140c7b65-c11d-4032-aaf8-db6b3df5127e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.679 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance e1504ff5-76c4-4676-b71d-745b31db4308 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.679 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 63fa96af-eee7-4ee3-b95a-c4036a37b3bb actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.680 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.680 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:07:40 np0005604943 nova_compute[238883]: 2026-02-02 12:07:40.748 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:07:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:07:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:07:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:07:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:07:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.171 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.172 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.195 238887 DEBUG nova.compute.manager [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.273 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:07:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1204290773' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.327 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.333 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.351 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.375 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.375 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.376 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.382 238887 DEBUG nova.virt.hardware [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.382 238887 INFO nova.compute.claims [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.460 238887 DEBUG nova.network.neutron [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Updating instance_info_cache with network_info: [{"id": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "address": "fa:16:3e:66:ce:61", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5698cd0c-cd", "ovs_interfaceid": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.483 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Releasing lock "refresh_cache-63fa96af-eee7-4ee3-b95a-c4036a37b3bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.484 238887 DEBUG nova.compute.manager [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Instance network_info: |[{"id": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "address": "fa:16:3e:66:ce:61", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5698cd0c-cd", "ovs_interfaceid": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.484 238887 DEBUG oslo_concurrency.lockutils [req-3285f19a-b379-414a-b041-19eb0fbfb0e3 req-bd09060a-ad54-43b6-a364-998bfe59ef37 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-63fa96af-eee7-4ee3-b95a-c4036a37b3bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.484 238887 DEBUG nova.network.neutron [req-3285f19a-b379-414a-b041-19eb0fbfb0e3 req-bd09060a-ad54-43b6-a364-998bfe59ef37 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Refreshing network info cache for port 5698cd0c-cd85-4888-ad3a-2c588d4e45cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.488 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Start _get_guest_xml network_info=[{"id": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "address": "fa:16:3e:66:ce:61", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5698cd0c-cd", "ovs_interfaceid": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': '28a1a93f-06d1-41cf-ac40-0ec24f075e4d', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5b4325ec-9602-4b79-9255-eb8f8017eaca', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5b4325ec-9602-4b79-9255-eb8f8017eaca', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '63fa96af-eee7-4ee3-b95a-c4036a37b3bb', 'attached_at': '', 'detached_at': '', 'volume_id': '5b4325ec-9602-4b79-9255-eb8f8017eaca', 'serial': '5b4325ec-9602-4b79-9255-eb8f8017eaca'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.493 238887 WARNING nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.504 238887 DEBUG nova.virt.libvirt.host [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.505 238887 DEBUG nova.virt.libvirt.host [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.510 238887 DEBUG nova.virt.libvirt.host [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.511 238887 DEBUG nova.virt.libvirt.host [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.511 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.511 238887 DEBUG nova.virt.hardware [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.512 238887 DEBUG nova.virt.hardware [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.512 238887 DEBUG nova.virt.hardware [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.512 238887 DEBUG nova.virt.hardware [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.513 238887 DEBUG nova.virt.hardware [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.513 238887 DEBUG nova.virt.hardware [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.513 238887 DEBUG nova.virt.hardware [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.513 238887 DEBUG nova.virt.hardware [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.513 238887 DEBUG nova.virt.hardware [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.514 238887 DEBUG nova.virt.hardware [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.514 238887 DEBUG nova.virt.hardware [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.537 238887 DEBUG nova.storage.rbd_utils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] rbd image 63fa96af-eee7-4ee3-b95a-c4036a37b3bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.542 238887 DEBUG oslo_concurrency.processutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.565 238887 DEBUG oslo_concurrency.processutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:41 np0005604943 nova_compute[238883]: 2026-02-02 12:07:41.735 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4209124703' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.078 238887 DEBUG oslo_concurrency.processutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/6097843' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.185 238887 DEBUG os_brick.encryptors [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Using volume encryption metadata '{'encryption_key_id': '0c819315-c3ed-483c-bb26-8f84bd44f5b0', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5b4325ec-9602-4b79-9255-eb8f8017eaca', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5b4325ec-9602-4b79-9255-eb8f8017eaca', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '63fa96af-eee7-4ee3-b95a-c4036a37b3bb', 'attached_at': '', 'detached_at': '', 'volume_id': '5b4325ec-9602-4b79-9255-eb8f8017eaca', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.189 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.193 238887 DEBUG oslo_concurrency.processutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.200 238887 DEBUG nova.compute.provider_tree [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.209 238887 DEBUG barbicanclient.v1.secrets [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.210 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.218 238887 DEBUG nova.scheduler.client.report [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.242 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.243 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.245 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.245 238887 DEBUG nova.compute.manager [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.273 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.273 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.296 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.297 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.303 238887 DEBUG nova.compute.manager [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.304 238887 DEBUG nova.network.neutron [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.320 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.321 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.325 238887 INFO nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.343 238887 DEBUG nova.compute.manager [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.351 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.352 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.359 238887 DEBUG nova.network.neutron [req-3285f19a-b379-414a-b041-19eb0fbfb0e3 req-bd09060a-ad54-43b6-a364-998bfe59ef37 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Updated VIF entry in instance network info cache for port 5698cd0c-cd85-4888-ad3a-2c588d4e45cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.359 238887 DEBUG nova.network.neutron [req-3285f19a-b379-414a-b041-19eb0fbfb0e3 req-bd09060a-ad54-43b6-a364-998bfe59ef37 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Updating instance_info_cache with network_info: [{"id": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "address": "fa:16:3e:66:ce:61", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5698cd0c-cd", "ovs_interfaceid": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.369 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.375 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.376 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.379 238887 DEBUG oslo_concurrency.lockutils [req-3285f19a-b379-414a-b041-19eb0fbfb0e3 req-bd09060a-ad54-43b6-a364-998bfe59ef37 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-63fa96af-eee7-4ee3-b95a-c4036a37b3bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.405 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.406 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.410 238887 INFO nova.virt.block_device [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Booting with volume c883c4de-4ba6-485f-86f8-0f0ce82aee70 at /dev/vda#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.419 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.420 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.432 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.432 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.453 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.454 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 535 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 22 KiB/s wr, 138 op/s
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.474 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.475 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.501 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.502 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.533 238887 DEBUG nova.policy [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '084f489a7b4c4fecba7b0942ed1b7203', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '851fb6d80faf43cc9b2fef1913323704', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.557 238887 DEBUG os_brick.utils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.558 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.567 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.567 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[bc59797e-a807-4070-a69b-70191acf9fbb]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.569 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.576 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.576 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[d2357f1d-cf2a-414f-b52d-4061383baa85]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.578 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.591 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.592 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[f525b194-52b7-4861-959f-bd134bbf9a00]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.594 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[02e33579-2a1c-4830-93b8-8c20108c201a]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.594 238887 DEBUG oslo_concurrency.processutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.623 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.624 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.627 238887 DEBUG oslo_concurrency.processutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.629 238887 DEBUG os_brick.initiator.connectors.lightos [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.629 238887 DEBUG os_brick.initiator.connectors.lightos [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.630 238887 DEBUG os_brick.initiator.connectors.lightos [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.630 238887 DEBUG os_brick.utils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.630 238887 DEBUG nova.virt.block_device [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Updating existing volume attachment record: 41b37ba5-4ace-4f95-8c02-20ba41e31139 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.647 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.648 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.672 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.674 238887 INFO barbicanclient.base [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/0c819315-c3ed-483c-bb26-8f84bd44f5b0#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.699 238887 DEBUG barbicanclient.client [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.701 238887 DEBUG nova.virt.libvirt.host [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  <usage type="volume">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <volume>5b4325ec-9602-4b79-9255-eb8f8017eaca</volume>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  </usage>
Feb  2 07:07:42 np0005604943 nova_compute[238883]: </secret>
Feb  2 07:07:42 np0005604943 nova_compute[238883]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.736 238887 DEBUG nova.virt.libvirt.vif [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:07:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1023312054',display_name='tempest-TransferEncryptedVolumeTest-server-1023312054',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1023312054',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDy1jBskL6RDlru0VUyMYEuQYZdWj4mgPqYNbp/ZxOi/SP0295JAyJLHX3JiQjzCwuF8BsyBv7iV3J6nvrpEE+i/AXa4yixOsMe088OGvWt8cZiFnV/xX7EKx5mK84nug==',key_name='tempest-TransferEncryptedVolumeTest-704936637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c7b49c49c104c079544033b07fb2f3d',ramdisk_id='',reservation_id='r-d4lddim5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-347797880',owner_user_name='tempest-TransferEncryptedVolumeTest-347797880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:07:38Z,user_data=None,user_id='cd5824e18d5e443cb24d3bf55ff2c553',uuid=63fa96af-eee7-4ee3-b95a-c4036a37b3bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "address": "fa:16:3e:66:ce:61", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5698cd0c-cd", "ovs_interfaceid": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.737 238887 DEBUG nova.network.os_vif_util [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converting VIF {"id": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "address": "fa:16:3e:66:ce:61", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5698cd0c-cd", "ovs_interfaceid": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.738 238887 DEBUG nova.network.os_vif_util [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ce:61,bridge_name='br-int',has_traffic_filtering=True,id=5698cd0c-cd85-4888-ad3a-2c588d4e45cf,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5698cd0c-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.740 238887 DEBUG nova.objects.instance [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lazy-loading 'pci_devices' on Instance uuid 63fa96af-eee7-4ee3-b95a-c4036a37b3bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:07:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.765 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  <uuid>63fa96af-eee7-4ee3-b95a-c4036a37b3bb</uuid>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  <name>instance-00000019</name>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1023312054</nova:name>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:07:41</nova:creationTime>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        <nova:user uuid="cd5824e18d5e443cb24d3bf55ff2c553">tempest-TransferEncryptedVolumeTest-347797880-project-member</nova:user>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        <nova:project uuid="4c7b49c49c104c079544033b07fb2f3d">tempest-TransferEncryptedVolumeTest-347797880</nova:project>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        <nova:port uuid="5698cd0c-cd85-4888-ad3a-2c588d4e45cf">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <entry name="serial">63fa96af-eee7-4ee3-b95a-c4036a37b3bb</entry>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <entry name="uuid">63fa96af-eee7-4ee3-b95a-c4036a37b3bb</entry>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/63fa96af-eee7-4ee3-b95a-c4036a37b3bb_disk.config">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-5b4325ec-9602-4b79-9255-eb8f8017eaca">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <serial>5b4325ec-9602-4b79-9255-eb8f8017eaca</serial>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <encryption format="luks">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:        <secret type="passphrase" uuid="c8ef0e8e-4953-4019-aa8f-71cbc9ddac92"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      </encryption>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:66:ce:61"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <target dev="tap5698cd0c-cd"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/63fa96af-eee7-4ee3-b95a-c4036a37b3bb/console.log" append="off"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:07:42 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:07:42 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:07:42 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:07:42 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.766 238887 DEBUG nova.compute.manager [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Preparing to wait for external event network-vif-plugged-5698cd0c-cd85-4888-ad3a-2c588d4e45cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.767 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.767 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.767 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.768 238887 DEBUG nova.virt.libvirt.vif [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:07:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1023312054',display_name='tempest-TransferEncryptedVolumeTest-server-1023312054',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1023312054',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDy1jBskL6RDlru0VUyMYEuQYZdWj4mgPqYNbp/ZxOi/SP0295JAyJLHX3JiQjzCwuF8BsyBv7iV3J6nvrpEE+i/AXa4yixOsMe088OGvWt8cZiFnV/xX7EKx5mK84nug==',key_name='tempest-TransferEncryptedVolumeTest-704936637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c7b49c49c104c079544033b07fb2f3d',ramdisk_id='',reservation_id='r-d4lddim5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-347797880',owner_user_name='tempest-TransferEncryptedVolumeTest-347797880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:07:38Z,user_data=None,user_id='cd5824e18d5e443cb24d3bf55ff2c553',uuid=63fa96af-eee7-4ee3-b95a-c4036a37b3bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "address": "fa:16:3e:66:ce:61", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5698cd0c-cd", "ovs_interfaceid": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.768 238887 DEBUG nova.network.os_vif_util [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converting VIF {"id": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "address": "fa:16:3e:66:ce:61", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5698cd0c-cd", "ovs_interfaceid": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.769 238887 DEBUG nova.network.os_vif_util [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ce:61,bridge_name='br-int',has_traffic_filtering=True,id=5698cd0c-cd85-4888-ad3a-2c588d4e45cf,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5698cd0c-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.769 238887 DEBUG os_vif [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ce:61,bridge_name='br-int',has_traffic_filtering=True,id=5698cd0c-cd85-4888-ad3a-2c588d4e45cf,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5698cd0c-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.770 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.771 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.771 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.776 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.777 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5698cd0c-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.778 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5698cd0c-cd, col_values=(('external_ids', {'iface-id': '5698cd0c-cd85-4888-ad3a-2c588d4e45cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:ce:61', 'vm-uuid': '63fa96af-eee7-4ee3-b95a-c4036a37b3bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.780 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.782 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:07:42 np0005604943 NetworkManager[49093]: <info>  [1770034062.7815] manager: (tap5698cd0c-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.789 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.791 238887 INFO os_vif [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ce:61,bridge_name='br-int',has_traffic_filtering=True,id=5698cd0c-cd85-4888-ad3a-2c588d4e45cf,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5698cd0c-cd')#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.840 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.840 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.841 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] No VIF found with MAC fa:16:3e:66:ce:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.841 238887 INFO nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Using config drive#033[00m
Feb  2 07:07:42 np0005604943 nova_compute[238883]: 2026-02-02 12:07:42.866 238887 DEBUG nova.storage.rbd_utils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] rbd image 63fa96af-eee7-4ee3-b95a-c4036a37b3bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:07:42 np0005604943 podman[266793]: 2026-02-02 12:07:42.883327717 +0000 UTC m=+0.048832108 container create 60bf1bc7a5bef7e497c7e9aa44d73d451fececde37f381ccab87579de1135e5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_newton, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 07:07:42 np0005604943 systemd[1]: Started libpod-conmon-60bf1bc7a5bef7e497c7e9aa44d73d451fececde37f381ccab87579de1135e5c.scope.
Feb  2 07:07:42 np0005604943 podman[266793]: 2026-02-02 12:07:42.859953606 +0000 UTC m=+0.025458017 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:07:42 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:07:42 np0005604943 podman[266793]: 2026-02-02 12:07:42.976165021 +0000 UTC m=+0.141669432 container init 60bf1bc7a5bef7e497c7e9aa44d73d451fececde37f381ccab87579de1135e5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_newton, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 07:07:42 np0005604943 podman[266793]: 2026-02-02 12:07:42.9850289 +0000 UTC m=+0.150533291 container start 60bf1bc7a5bef7e497c7e9aa44d73d451fececde37f381ccab87579de1135e5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_newton, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:07:42 np0005604943 podman[266793]: 2026-02-02 12:07:42.989928283 +0000 UTC m=+0.155432694 container attach 60bf1bc7a5bef7e497c7e9aa44d73d451fececde37f381ccab87579de1135e5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_newton, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:07:42 np0005604943 elated_newton[266827]: 167 167
Feb  2 07:07:42 np0005604943 systemd[1]: libpod-60bf1bc7a5bef7e497c7e9aa44d73d451fececde37f381ccab87579de1135e5c.scope: Deactivated successfully.
Feb  2 07:07:42 np0005604943 conmon[266827]: conmon 60bf1bc7a5bef7e497c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60bf1bc7a5bef7e497c7e9aa44d73d451fececde37f381ccab87579de1135e5c.scope/container/memory.events
Feb  2 07:07:42 np0005604943 podman[266793]: 2026-02-02 12:07:42.995085422 +0000 UTC m=+0.160589813 container died 60bf1bc7a5bef7e497c7e9aa44d73d451fececde37f381ccab87579de1135e5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:07:43 np0005604943 systemd[1]: var-lib-containers-storage-overlay-e372a4229ff630806bf6de2eab40a922d1d9ba2759504e9eb873e51cb5e815d6-merged.mount: Deactivated successfully.
Feb  2 07:07:43 np0005604943 podman[266793]: 2026-02-02 12:07:43.036398666 +0000 UTC m=+0.201903057 container remove 60bf1bc7a5bef7e497c7e9aa44d73d451fececde37f381ccab87579de1135e5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_newton, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 07:07:43 np0005604943 systemd[1]: libpod-conmon-60bf1bc7a5bef7e497c7e9aa44d73d451fececde37f381ccab87579de1135e5c.scope: Deactivated successfully.
Feb  2 07:07:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:07:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:07:43 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.170 238887 INFO nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Creating config drive at /var/lib/nova/instances/63fa96af-eee7-4ee3-b95a-c4036a37b3bb/disk.config#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.179 238887 DEBUG oslo_concurrency.processutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/63fa96af-eee7-4ee3-b95a-c4036a37b3bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfc3b7jxr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:43 np0005604943 podman[266851]: 2026-02-02 12:07:43.192910109 +0000 UTC m=+0.047604816 container create 1134c13b844a4f2ea14e8bd16947cfb31f3aff72845dd3fbc2a268b6fd2eb181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 07:07:43 np0005604943 systemd[1]: Started libpod-conmon-1134c13b844a4f2ea14e8bd16947cfb31f3aff72845dd3fbc2a268b6fd2eb181.scope.
Feb  2 07:07:43 np0005604943 podman[266851]: 2026-02-02 12:07:43.170411302 +0000 UTC m=+0.025106009 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:07:43 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:07:43 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5aa9779244a4846525978d8634c5940a762942446791c1e1d9c06f64bc50b91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:43 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5aa9779244a4846525978d8634c5940a762942446791c1e1d9c06f64bc50b91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:43 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5aa9779244a4846525978d8634c5940a762942446791c1e1d9c06f64bc50b91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:43 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5aa9779244a4846525978d8634c5940a762942446791c1e1d9c06f64bc50b91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:43 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5aa9779244a4846525978d8634c5940a762942446791c1e1d9c06f64bc50b91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:43 np0005604943 podman[266851]: 2026-02-02 12:07:43.305305201 +0000 UTC m=+0.159999918 container init 1134c13b844a4f2ea14e8bd16947cfb31f3aff72845dd3fbc2a268b6fd2eb181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb  2 07:07:43 np0005604943 podman[266851]: 2026-02-02 12:07:43.311586831 +0000 UTC m=+0.166281538 container start 1134c13b844a4f2ea14e8bd16947cfb31f3aff72845dd3fbc2a268b6fd2eb181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 07:07:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:07:43 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2458533814' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:07:43 np0005604943 podman[266851]: 2026-02-02 12:07:43.315185457 +0000 UTC m=+0.169880164 container attach 1134c13b844a4f2ea14e8bd16947cfb31f3aff72845dd3fbc2a268b6fd2eb181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.338 238887 DEBUG oslo_concurrency.processutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/63fa96af-eee7-4ee3-b95a-c4036a37b3bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfc3b7jxr" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.371 238887 DEBUG nova.storage.rbd_utils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] rbd image 63fa96af-eee7-4ee3-b95a-c4036a37b3bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.376 238887 DEBUG oslo_concurrency.processutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/63fa96af-eee7-4ee3-b95a-c4036a37b3bb/disk.config 63fa96af-eee7-4ee3-b95a-c4036a37b3bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.407 238887 DEBUG nova.network.neutron [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Successfully created port: 999bfeaf-3590-4070-95cb-80289feea19a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.524 238887 DEBUG oslo_concurrency.processutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/63fa96af-eee7-4ee3-b95a-c4036a37b3bb/disk.config 63fa96af-eee7-4ee3-b95a-c4036a37b3bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.525 238887 INFO nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Deleting local config drive /var/lib/nova/instances/63fa96af-eee7-4ee3-b95a-c4036a37b3bb/disk.config because it was imported into RBD.#033[00m
Feb  2 07:07:43 np0005604943 kernel: tap5698cd0c-cd: entered promiscuous mode
Feb  2 07:07:43 np0005604943 NetworkManager[49093]: <info>  [1770034063.5790] manager: (tap5698cd0c-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Feb  2 07:07:43 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:43Z|00237|binding|INFO|Claiming lport 5698cd0c-cd85-4888-ad3a-2c588d4e45cf for this chassis.
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.580 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:43 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:43Z|00238|binding|INFO|5698cd0c-cd85-4888-ad3a-2c588d4e45cf: Claiming fa:16:3e:66:ce:61 10.100.0.12
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.589 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:ce:61 10.100.0.12'], port_security=['fa:16:3e:66:ce:61 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '63fa96af-eee7-4ee3-b95a-c4036a37b3bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efa24ae1-9962-44ca-882a-8d146356fcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c7b49c49c104c079544033b07fb2f3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd3424b4-e169-47dd-816d-ac2340e28ccc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b6e8bcf-741b-41c8-a826-9b6dbb1c260b, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=5698cd0c-cd85-4888-ad3a-2c588d4e45cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.591 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 5698cd0c-cd85-4888-ad3a-2c588d4e45cf in datapath efa24ae1-9962-44ca-882a-8d146356fcca bound to our chassis#033[00m
Feb  2 07:07:43 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:43Z|00239|binding|INFO|Setting lport 5698cd0c-cd85-4888-ad3a-2c588d4e45cf ovn-installed in OVS
Feb  2 07:07:43 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:43Z|00240|binding|INFO|Setting lport 5698cd0c-cd85-4888-ad3a-2c588d4e45cf up in Southbound
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.592 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network efa24ae1-9962-44ca-882a-8d146356fcca#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.596 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.611 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.612 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0243f9-8a27-4f21-8f0a-037d65528809]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.614 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapefa24ae1-91 in ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:07:43 np0005604943 systemd-udevd[266929]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.617 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapefa24ae1-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.617 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b25d1163-3b53-41b7-973f-d59f02ceb7af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.623 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[0da626a5-5822-4d69-87d9-7c7534dea728]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 systemd-machined[206973]: New machine qemu-25-instance-00000019.
Feb  2 07:07:43 np0005604943 NetworkManager[49093]: <info>  [1770034063.6422] device (tap5698cd0c-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:07:43 np0005604943 NetworkManager[49093]: <info>  [1770034063.6428] device (tap5698cd0c-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:07:43 np0005604943 systemd[1]: Started Virtual Machine qemu-25-instance-00000019.
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.645 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.646 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.647 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.648 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb65249-d876-41ca-9fd0-df3e365f4203]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.672 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.678 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d4ff9e-fead-4674-bf08-15000bd88e2d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.697 238887 DEBUG nova.compute.manager [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.698 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.699 238887 INFO nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Creating image(s)#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.702 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.702 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Ensure instance console log exists: /var/lib/nova/instances/2b79cd97-17e8-4d8d-bc7b-2c282a490be3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.702 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.703 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.703 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.716 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[29242a40-e06b-42ee-8e5a-d756ceb0062f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 NetworkManager[49093]: <info>  [1770034063.7252] manager: (tapefa24ae1-90): new Veth device (/org/freedesktop/NetworkManager/Devices/125)
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.724 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1931b0-b1dd-410d-bbb8-5c062b830abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.755 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[c4149fe6-d0a1-42e1-a3ca-12f098842287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.759 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad15b10-2431-41fe-95b1-3215c47d81d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 NetworkManager[49093]: <info>  [1770034063.7843] device (tapefa24ae1-90): carrier: link connected
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.791 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[03c9497e-6c7a-4da0-baf9-55ecb2e86a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.798 238887 DEBUG nova.compute.manager [req-00fdf4e3-0afc-4422-8f65-c42f39903854 req-e931f032-6695-4d98-b4f5-8a3816aaee11 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Received event network-vif-plugged-5698cd0c-cd85-4888-ad3a-2c588d4e45cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.798 238887 DEBUG oslo_concurrency.lockutils [req-00fdf4e3-0afc-4422-8f65-c42f39903854 req-e931f032-6695-4d98-b4f5-8a3816aaee11 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.798 238887 DEBUG oslo_concurrency.lockutils [req-00fdf4e3-0afc-4422-8f65-c42f39903854 req-e931f032-6695-4d98-b4f5-8a3816aaee11 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.799 238887 DEBUG oslo_concurrency.lockutils [req-00fdf4e3-0afc-4422-8f65-c42f39903854 req-e931f032-6695-4d98-b4f5-8a3816aaee11 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.799 238887 DEBUG nova.compute.manager [req-00fdf4e3-0afc-4422-8f65-c42f39903854 req-e931f032-6695-4d98-b4f5-8a3816aaee11 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Processing event network-vif-plugged-5698cd0c-cd85-4888-ad3a-2c588d4e45cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.808 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[685dfcec-a985-4b99-b595-0c4b006a9d5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefa24ae1-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:4e:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450913, 'reachable_time': 27578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266971, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.822 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d9500c65-0e0a-46de-8f86-2a4f831a72f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:4ebf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 450913, 'tstamp': 450913}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266974, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 focused_chaum[266871]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:07:43 np0005604943 focused_chaum[266871]: --> All data devices are unavailable
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.840 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2b976a8c-4b3c-4211-95fc-556716df1f19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefa24ae1-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:4e:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450913, 'reachable_time': 27578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266976, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.874 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9a9514c2-017b-4bfa-9867-7d8ad764b9e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 systemd[1]: libpod-1134c13b844a4f2ea14e8bd16947cfb31f3aff72845dd3fbc2a268b6fd2eb181.scope: Deactivated successfully.
Feb  2 07:07:43 np0005604943 podman[266851]: 2026-02-02 12:07:43.878813793 +0000 UTC m=+0.733508520 container died 1134c13b844a4f2ea14e8bd16947cfb31f3aff72845dd3fbc2a268b6fd2eb181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:07:43 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f5aa9779244a4846525978d8634c5940a762942446791c1e1d9c06f64bc50b91-merged.mount: Deactivated successfully.
Feb  2 07:07:43 np0005604943 podman[266851]: 2026-02-02 12:07:43.926790657 +0000 UTC m=+0.781485364 container remove 1134c13b844a4f2ea14e8bd16947cfb31f3aff72845dd3fbc2a268b6fd2eb181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:07:43 np0005604943 systemd[1]: libpod-conmon-1134c13b844a4f2ea14e8bd16947cfb31f3aff72845dd3fbc2a268b6fd2eb181.scope: Deactivated successfully.
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.983 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[40af60cf-75d9-4e45-b213-ae2467b404ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.987 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefa24ae1-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.987 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.988 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefa24ae1-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:43 np0005604943 kernel: tapefa24ae1-90: entered promiscuous mode
Feb  2 07:07:43 np0005604943 NetworkManager[49093]: <info>  [1770034063.9920] manager: (tapefa24ae1-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Feb  2 07:07:43 np0005604943 nova_compute[238883]: 2026-02-02 12:07:43.993 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.994 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapefa24ae1-90, col_values=(('external_ids', {'iface-id': '88fa0d04-0a79-4556-b2c6-d65a3a18ab58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:43 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:43Z|00241|binding|INFO|Releasing lport 88fa0d04-0a79-4556-b2c6-d65a3a18ab58 from this chassis (sb_readonly=0)
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.997 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/efa24ae1-9962-44ca-882a-8d146356fcca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/efa24ae1-9962-44ca-882a-8d146356fcca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.998 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca6f0d1-255b-4640-b2ec-8f9eba7a9772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:43.999 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:07:43 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-efa24ae1-9962-44ca-882a-8d146356fcca
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/efa24ae1-9962-44ca-882a-8d146356fcca.pid.haproxy
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID efa24ae1-9962-44ca-882a-8d146356fcca
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 07:07:44 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:44.000 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'env', 'PROCESS_TAG=haproxy-efa24ae1-9962-44ca-882a-8d146356fcca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/efa24ae1-9962-44ca-882a-8d146356fcca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 07:07:44 np0005604943 nova_compute[238883]: 2026-02-02 12:07:44.002 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:44 np0005604943 nova_compute[238883]: 2026-02-02 12:07:44.170 238887 DEBUG nova.network.neutron [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Successfully updated port: 999bfeaf-3590-4070-95cb-80289feea19a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:07:44 np0005604943 nova_compute[238883]: 2026-02-02 12:07:44.186 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "refresh_cache-2b79cd97-17e8-4d8d-bc7b-2c282a490be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:07:44 np0005604943 nova_compute[238883]: 2026-02-02 12:07:44.187 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquired lock "refresh_cache-2b79cd97-17e8-4d8d-bc7b-2c282a490be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:07:44 np0005604943 nova_compute[238883]: 2026-02-02 12:07:44.187 238887 DEBUG nova.network.neutron [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:07:44 np0005604943 nova_compute[238883]: 2026-02-02 12:07:44.279 238887 DEBUG nova.compute.manager [req-6a040f1c-3e08-406d-a5cb-b119c1e038f3 req-42208b4a-3f9b-4e3c-96aa-ea774419352b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Received event network-changed-999bfeaf-3590-4070-95cb-80289feea19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:44 np0005604943 nova_compute[238883]: 2026-02-02 12:07:44.279 238887 DEBUG nova.compute.manager [req-6a040f1c-3e08-406d-a5cb-b119c1e038f3 req-42208b4a-3f9b-4e3c-96aa-ea774419352b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Refreshing instance network info cache due to event network-changed-999bfeaf-3590-4070-95cb-80289feea19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:07:44 np0005604943 nova_compute[238883]: 2026-02-02 12:07:44.280 238887 DEBUG oslo_concurrency.lockutils [req-6a040f1c-3e08-406d-a5cb-b119c1e038f3 req-42208b4a-3f9b-4e3c-96aa-ea774419352b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-2b79cd97-17e8-4d8d-bc7b-2c282a490be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:07:44 np0005604943 podman[267100]: 2026-02-02 12:07:44.371188256 +0000 UTC m=+0.053178246 container create 89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:07:44 np0005604943 nova_compute[238883]: 2026-02-02 12:07:44.398 238887 DEBUG nova.network.neutron [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:07:44 np0005604943 systemd[1]: Started libpod-conmon-89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b.scope.
Feb  2 07:07:44 np0005604943 podman[267100]: 2026-02-02 12:07:44.346627024 +0000 UTC m=+0.028617014 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:07:44 np0005604943 podman[267125]: 2026-02-02 12:07:44.445710637 +0000 UTC m=+0.046242058 container create 859c19f837474207ee39584a24ac157835223ae5f1e3ca98d045b2c4141a4260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 07:07:44 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:07:44 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f521dfd34739cdeeb6dfd05828f923c9ae4259c120484d97d7e3391e1a2bbc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 535 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 119 op/s
Feb  2 07:07:44 np0005604943 podman[267100]: 2026-02-02 12:07:44.479191691 +0000 UTC m=+0.161181681 container init 89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb  2 07:07:44 np0005604943 podman[267100]: 2026-02-02 12:07:44.488171282 +0000 UTC m=+0.170161272 container start 89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 07:07:44 np0005604943 systemd[1]: Started libpod-conmon-859c19f837474207ee39584a24ac157835223ae5f1e3ca98d045b2c4141a4260.scope.
Feb  2 07:07:44 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[267139]: [NOTICE]   (267146) : New worker (267150) forked
Feb  2 07:07:44 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[267139]: [NOTICE]   (267146) : Loading success.
Feb  2 07:07:44 np0005604943 podman[267125]: 2026-02-02 12:07:44.425931984 +0000 UTC m=+0.026463435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:07:44 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:07:44 np0005604943 podman[267125]: 2026-02-02 12:07:44.549758384 +0000 UTC m=+0.150289805 container init 859c19f837474207ee39584a24ac157835223ae5f1e3ca98d045b2c4141a4260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_turing, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Feb  2 07:07:44 np0005604943 podman[267125]: 2026-02-02 12:07:44.559266561 +0000 UTC m=+0.159797982 container start 859c19f837474207ee39584a24ac157835223ae5f1e3ca98d045b2c4141a4260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 07:07:44 np0005604943 podman[267125]: 2026-02-02 12:07:44.563813163 +0000 UTC m=+0.164344604 container attach 859c19f837474207ee39584a24ac157835223ae5f1e3ca98d045b2c4141a4260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Feb  2 07:07:44 np0005604943 frosty_turing[267148]: 167 167
Feb  2 07:07:44 np0005604943 systemd[1]: libpod-859c19f837474207ee39584a24ac157835223ae5f1e3ca98d045b2c4141a4260.scope: Deactivated successfully.
Feb  2 07:07:44 np0005604943 podman[267125]: 2026-02-02 12:07:44.567131502 +0000 UTC m=+0.167662923 container died 859c19f837474207ee39584a24ac157835223ae5f1e3ca98d045b2c4141a4260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb  2 07:07:44 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b48b01d301b7f0708fb02df2ca643aa821c449e60a1d99cff541f0e53d90c49c-merged.mount: Deactivated successfully.
Feb  2 07:07:44 np0005604943 podman[267125]: 2026-02-02 12:07:44.606076303 +0000 UTC m=+0.206607734 container remove 859c19f837474207ee39584a24ac157835223ae5f1e3ca98d045b2c4141a4260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 07:07:44 np0005604943 systemd[1]: libpod-conmon-859c19f837474207ee39584a24ac157835223ae5f1e3ca98d045b2c4141a4260.scope: Deactivated successfully.
Feb  2 07:07:44 np0005604943 nova_compute[238883]: 2026-02-02 12:07:44.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:07:44 np0005604943 nova_compute[238883]: 2026-02-02 12:07:44.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:07:44 np0005604943 nova_compute[238883]: 2026-02-02 12:07:44.671 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:07:44 np0005604943 nova_compute[238883]: 2026-02-02 12:07:44.672 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  2 07:07:44 np0005604943 podman[267182]: 2026-02-02 12:07:44.761791864 +0000 UTC m=+0.049017794 container create 583bf76f7cf5a4f9b4b6b218f8bc40479257dc77c10d7f80b8f3d155012d1569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_noether, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb  2 07:07:44 np0005604943 systemd[1]: Started libpod-conmon-583bf76f7cf5a4f9b4b6b218f8bc40479257dc77c10d7f80b8f3d155012d1569.scope.
Feb  2 07:07:44 np0005604943 podman[267182]: 2026-02-02 12:07:44.739656257 +0000 UTC m=+0.026882197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:07:44 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:07:44 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771be8ba839c4504478e17d2e743633e5e6041614b18b18ac9d5c25dc8abe84f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:44 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771be8ba839c4504478e17d2e743633e5e6041614b18b18ac9d5c25dc8abe84f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:44 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771be8ba839c4504478e17d2e743633e5e6041614b18b18ac9d5c25dc8abe84f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:44 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771be8ba839c4504478e17d2e743633e5e6041614b18b18ac9d5c25dc8abe84f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:44 np0005604943 podman[267182]: 2026-02-02 12:07:44.868329418 +0000 UTC m=+0.155555378 container init 583bf76f7cf5a4f9b4b6b218f8bc40479257dc77c10d7f80b8f3d155012d1569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:07:44 np0005604943 podman[267182]: 2026-02-02 12:07:44.876107628 +0000 UTC m=+0.163333558 container start 583bf76f7cf5a4f9b4b6b218f8bc40479257dc77c10d7f80b8f3d155012d1569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 07:07:44 np0005604943 podman[267182]: 2026-02-02 12:07:44.880698762 +0000 UTC m=+0.167924722 container attach 583bf76f7cf5a4f9b4b6b218f8bc40479257dc77c10d7f80b8f3d155012d1569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:07:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:07:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2103421893' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:07:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:07:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2103421893' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:07:45 np0005604943 sad_noether[267199]: {
Feb  2 07:07:45 np0005604943 sad_noether[267199]:    "0": [
Feb  2 07:07:45 np0005604943 sad_noether[267199]:        {
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "devices": [
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "/dev/loop3"
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            ],
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_name": "ceph_lv0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_size": "21470642176",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "name": "ceph_lv0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "tags": {
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.cluster_name": "ceph",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.crush_device_class": "",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.encrypted": "0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.objectstore": "bluestore",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.osd_id": "0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.type": "block",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.vdo": "0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.with_tpm": "0"
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            },
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "type": "block",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "vg_name": "ceph_vg0"
Feb  2 07:07:45 np0005604943 sad_noether[267199]:        }
Feb  2 07:07:45 np0005604943 sad_noether[267199]:    ],
Feb  2 07:07:45 np0005604943 sad_noether[267199]:    "1": [
Feb  2 07:07:45 np0005604943 sad_noether[267199]:        {
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "devices": [
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "/dev/loop4"
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            ],
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_name": "ceph_lv1",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_size": "21470642176",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "name": "ceph_lv1",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "tags": {
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.cluster_name": "ceph",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.crush_device_class": "",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.encrypted": "0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.objectstore": "bluestore",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.osd_id": "1",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.type": "block",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.vdo": "0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.with_tpm": "0"
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            },
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "type": "block",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "vg_name": "ceph_vg1"
Feb  2 07:07:45 np0005604943 sad_noether[267199]:        }
Feb  2 07:07:45 np0005604943 sad_noether[267199]:    ],
Feb  2 07:07:45 np0005604943 sad_noether[267199]:    "2": [
Feb  2 07:07:45 np0005604943 sad_noether[267199]:        {
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "devices": [
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "/dev/loop5"
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            ],
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_name": "ceph_lv2",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_size": "21470642176",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "name": "ceph_lv2",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "tags": {
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.cluster_name": "ceph",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.crush_device_class": "",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.encrypted": "0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.objectstore": "bluestore",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.osd_id": "2",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.type": "block",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.vdo": "0",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:                "ceph.with_tpm": "0"
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            },
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "type": "block",
Feb  2 07:07:45 np0005604943 sad_noether[267199]:            "vg_name": "ceph_vg2"
Feb  2 07:07:45 np0005604943 sad_noether[267199]:        }
Feb  2 07:07:45 np0005604943 sad_noether[267199]:    ]
Feb  2 07:07:45 np0005604943 sad_noether[267199]: }
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.167 238887 DEBUG nova.network.neutron [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Updating instance_info_cache with network_info: [{"id": "999bfeaf-3590-4070-95cb-80289feea19a", "address": "fa:16:3e:19:8b:14", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap999bfeaf-35", "ovs_interfaceid": "999bfeaf-3590-4070-95cb-80289feea19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:45 np0005604943 systemd[1]: libpod-583bf76f7cf5a4f9b4b6b218f8bc40479257dc77c10d7f80b8f3d155012d1569.scope: Deactivated successfully.
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.186 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Releasing lock "refresh_cache-2b79cd97-17e8-4d8d-bc7b-2c282a490be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.186 238887 DEBUG nova.compute.manager [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Instance network_info: |[{"id": "999bfeaf-3590-4070-95cb-80289feea19a", "address": "fa:16:3e:19:8b:14", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap999bfeaf-35", "ovs_interfaceid": "999bfeaf-3590-4070-95cb-80289feea19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.187 238887 DEBUG oslo_concurrency.lockutils [req-6a040f1c-3e08-406d-a5cb-b119c1e038f3 req-42208b4a-3f9b-4e3c-96aa-ea774419352b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-2b79cd97-17e8-4d8d-bc7b-2c282a490be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.187 238887 DEBUG nova.network.neutron [req-6a040f1c-3e08-406d-a5cb-b119c1e038f3 req-42208b4a-3f9b-4e3c-96aa-ea774419352b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Refreshing network info cache for port 999bfeaf-3590-4070-95cb-80289feea19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.191 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Start _get_guest_xml network_info=[{"id": "999bfeaf-3590-4070-95cb-80289feea19a", "address": "fa:16:3e:19:8b:14", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap999bfeaf-35", "ovs_interfaceid": "999bfeaf-3590-4070-95cb-80289feea19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': '41b37ba5-4ace-4f95-8c02-20ba41e31139', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c883c4de-4ba6-485f-86f8-0f0ce82aee70', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c883c4de-4ba6-485f-86f8-0f0ce82aee70', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '2b79cd97-17e8-4d8d-bc7b-2c282a490be3', 'attached_at': '', 'detached_at': '', 'volume_id': 'c883c4de-4ba6-485f-86f8-0f0ce82aee70', 'serial': 'c883c4de-4ba6-485f-86f8-0f0ce82aee70'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.208 238887 WARNING nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.215 238887 DEBUG nova.virt.libvirt.host [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.216 238887 DEBUG nova.virt.libvirt.host [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.220 238887 DEBUG nova.virt.libvirt.host [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.220 238887 DEBUG nova.virt.libvirt.host [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.221 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.221 238887 DEBUG nova.virt.hardware [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.221 238887 DEBUG nova.virt.hardware [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.221 238887 DEBUG nova.virt.hardware [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.221 238887 DEBUG nova.virt.hardware [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.222 238887 DEBUG nova.virt.hardware [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.222 238887 DEBUG nova.virt.hardware [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.222 238887 DEBUG nova.virt.hardware [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.222 238887 DEBUG nova.virt.hardware [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.222 238887 DEBUG nova.virt.hardware [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.222 238887 DEBUG nova.virt.hardware [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.223 238887 DEBUG nova.virt.hardware [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:07:45 np0005604943 podman[267208]: 2026-02-02 12:07:45.228636269 +0000 UTC m=+0.031354138 container died 583bf76f7cf5a4f9b4b6b218f8bc40479257dc77c10d7f80b8f3d155012d1569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb  2 07:07:45 np0005604943 systemd[1]: var-lib-containers-storage-overlay-771be8ba839c4504478e17d2e743633e5e6041614b18b18ac9d5c25dc8abe84f-merged.mount: Deactivated successfully.
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.254 238887 DEBUG nova.storage.rbd_utils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] rbd image 2b79cd97-17e8-4d8d-bc7b-2c282a490be3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.261 238887 DEBUG oslo_concurrency.processutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:45 np0005604943 podman[267208]: 2026-02-02 12:07:45.270055626 +0000 UTC m=+0.072773495 container remove 583bf76f7cf5a4f9b4b6b218f8bc40479257dc77c10d7f80b8f3d155012d1569 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_noether, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 07:07:45 np0005604943 systemd[1]: libpod-conmon-583bf76f7cf5a4f9b4b6b218f8bc40479257dc77c10d7f80b8f3d155012d1569.scope: Deactivated successfully.
Feb  2 07:07:45 np0005604943 podman[267323]: 2026-02-02 12:07:45.801707299 +0000 UTC m=+0.055726885 container create 8d75b51d6999d583b464029971ba4034d63f7476e529c7f67002b46524f2cd4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_merkle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 07:07:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:07:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3175227443' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:07:45 np0005604943 systemd[1]: Started libpod-conmon-8d75b51d6999d583b464029971ba4034d63f7476e529c7f67002b46524f2cd4d.scope.
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.858 238887 DEBUG oslo_concurrency.processutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:45 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.873 238887 DEBUG nova.compute.manager [req-c59b7350-779e-410e-9282-9bc158b2bac4 req-ac087d36-1a92-4bc1-b84a-99b18b2c8d25 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Received event network-vif-plugged-5698cd0c-cd85-4888-ad3a-2c588d4e45cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.873 238887 DEBUG oslo_concurrency.lockutils [req-c59b7350-779e-410e-9282-9bc158b2bac4 req-ac087d36-1a92-4bc1-b84a-99b18b2c8d25 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.873 238887 DEBUG oslo_concurrency.lockutils [req-c59b7350-779e-410e-9282-9bc158b2bac4 req-ac087d36-1a92-4bc1-b84a-99b18b2c8d25 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.874 238887 DEBUG oslo_concurrency.lockutils [req-c59b7350-779e-410e-9282-9bc158b2bac4 req-ac087d36-1a92-4bc1-b84a-99b18b2c8d25 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:45 np0005604943 podman[267323]: 2026-02-02 12:07:45.783881648 +0000 UTC m=+0.037901264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.878 238887 DEBUG nova.compute.manager [req-c59b7350-779e-410e-9282-9bc158b2bac4 req-ac087d36-1a92-4bc1-b84a-99b18b2c8d25 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] No waiting events found dispatching network-vif-plugged-5698cd0c-cd85-4888-ad3a-2c588d4e45cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.879 238887 WARNING nova.compute.manager [req-c59b7350-779e-410e-9282-9bc158b2bac4 req-ac087d36-1a92-4bc1-b84a-99b18b2c8d25 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Received unexpected event network-vif-plugged-5698cd0c-cd85-4888-ad3a-2c588d4e45cf for instance with vm_state building and task_state spawning.#033[00m
Feb  2 07:07:45 np0005604943 podman[267323]: 2026-02-02 12:07:45.88808735 +0000 UTC m=+0.142106966 container init 8d75b51d6999d583b464029971ba4034d63f7476e529c7f67002b46524f2cd4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:07:45 np0005604943 podman[267323]: 2026-02-02 12:07:45.896749833 +0000 UTC m=+0.150769419 container start 8d75b51d6999d583b464029971ba4034d63f7476e529c7f67002b46524f2cd4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_merkle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:07:45 np0005604943 podman[267323]: 2026-02-02 12:07:45.901886132 +0000 UTC m=+0.155905888 container attach 8d75b51d6999d583b464029971ba4034d63f7476e529c7f67002b46524f2cd4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_merkle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 07:07:45 np0005604943 systemd[1]: libpod-8d75b51d6999d583b464029971ba4034d63f7476e529c7f67002b46524f2cd4d.scope: Deactivated successfully.
Feb  2 07:07:45 np0005604943 goofy_merkle[267341]: 167 167
Feb  2 07:07:45 np0005604943 conmon[267341]: conmon 8d75b51d6999d583b464 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d75b51d6999d583b464029971ba4034d63f7476e529c7f67002b46524f2cd4d.scope/container/memory.events
Feb  2 07:07:45 np0005604943 podman[267323]: 2026-02-02 12:07:45.908545511 +0000 UTC m=+0.162565097 container died 8d75b51d6999d583b464029971ba4034d63f7476e529c7f67002b46524f2cd4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb  2 07:07:45 np0005604943 systemd[1]: var-lib-containers-storage-overlay-053856ce8df4278661be83f3cfb0fab0cfc5471d7f211fcf55dc3a2a7b0199fc-merged.mount: Deactivated successfully.
Feb  2 07:07:45 np0005604943 podman[267323]: 2026-02-02 12:07:45.967584424 +0000 UTC m=+0.221604010 container remove 8d75b51d6999d583b464029971ba4034d63f7476e529c7f67002b46524f2cd4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:07:45 np0005604943 systemd[1]: libpod-conmon-8d75b51d6999d583b464029971ba4034d63f7476e529c7f67002b46524f2cd4d.scope: Deactivated successfully.
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.978 238887 DEBUG os_brick.encryptors [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Using volume encryption metadata '{'encryption_key_id': '9c621940-18a7-4ff7-a898-747804f3bbe4', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c883c4de-4ba6-485f-86f8-0f0ce82aee70', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c883c4de-4ba6-485f-86f8-0f0ce82aee70', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '2b79cd97-17e8-4d8d-bc7b-2c282a490be3', 'attached_at': '', 'detached_at': '', 'volume_id': 'c883c4de-4ba6-485f-86f8-0f0ce82aee70', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.980 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.995 238887 DEBUG barbicanclient.v1.secrets [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/9c621940-18a7-4ff7-a898-747804f3bbe4 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 07:07:45 np0005604943 nova_compute[238883]: 2026-02-02 12:07:45.995 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.018 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.018 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.040 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.041 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.066 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.067 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.089 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.091 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.115 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.116 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 podman[267365]: 2026-02-02 12:07:46.119081691 +0000 UTC m=+0.038447768 container create 94b9a228344f4463861117b3f34808bf42c96111d0878c1a6204d0735b0830c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ptolemy, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.139 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.141 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 systemd[1]: Started libpod-conmon-94b9a228344f4463861117b3f34808bf42c96111d0878c1a6204d0735b0830c0.scope.
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.172 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.173 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:07:46 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0650341ed20d328a8e049184ae336f24e72048f73e476748ccb06c15631c7cf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.199 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.199 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 podman[267365]: 2026-02-02 12:07:46.102964106 +0000 UTC m=+0.022330203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:07:46 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0650341ed20d328a8e049184ae336f24e72048f73e476748ccb06c15631c7cf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:46 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0650341ed20d328a8e049184ae336f24e72048f73e476748ccb06c15631c7cf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:46 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0650341ed20d328a8e049184ae336f24e72048f73e476748ccb06c15631c7cf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.218 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.219 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 podman[267365]: 2026-02-02 12:07:46.224905816 +0000 UTC m=+0.144271903 container init 94b9a228344f4463861117b3f34808bf42c96111d0878c1a6204d0735b0830c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ptolemy, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:07:46 np0005604943 podman[267380]: 2026-02-02 12:07:46.229650434 +0000 UTC m=+0.073681979 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 07:07:46 np0005604943 podman[267365]: 2026-02-02 12:07:46.232898732 +0000 UTC m=+0.152264809 container start 94b9a228344f4463861117b3f34808bf42c96111d0878c1a6204d0735b0830c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ptolemy, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:07:46 np0005604943 podman[267365]: 2026-02-02 12:07:46.237746153 +0000 UTC m=+0.157112260 container attach 94b9a228344f4463861117b3f34808bf42c96111d0878c1a6204d0735b0830c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.238 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.238 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 podman[267379]: 2026-02-02 12:07:46.257079854 +0000 UTC m=+0.100612196 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.258 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.262 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.279 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.280 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.297 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.298 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.321 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.321 238887 INFO barbicanclient.base [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Calculated Secrets uuid ref: secrets/9c621940-18a7-4ff7-a898-747804f3bbe4#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.349 238887 DEBUG barbicanclient.client [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.350 238887 DEBUG nova.virt.libvirt.host [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  <usage type="volume">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <volume>c883c4de-4ba6-485f-86f8-0f0ce82aee70</volume>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  </usage>
Feb  2 07:07:46 np0005604943 nova_compute[238883]: </secret>
Feb  2 07:07:46 np0005604943 nova_compute[238883]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.380 238887 DEBUG nova.virt.libvirt.vif [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:07:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1206460550',display_name='tempest-TestEncryptedCinderVolumes-server-1206460550',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1206460550',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4sEG9hObpGnevoIlqMdkrX6LtyepBRCjADAYBnTUNxH7zE9sXens2JsebTT1q5zN1V4atJxK/wradQkp5n2K1zuz899xdCKCopiRNmhKseY0+RU/9UYAZOT5nySAcl7g==',key_name='tempest-TestEncryptedCinderVolumes-1244227927',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851fb6d80faf43cc9b2fef1913323704',ramdisk_id='',reservation_id='r-hvadytr8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1976450145',owner_user_name='tempest-TestEncryptedCinderVolumes-1976450145-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:07:42Z,user_data=None,user_id='084f489a7b4c4fecba7b0942ed1b7203',uuid=2b79cd97-17e8-4d8d-bc7b-2c282a490be3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "999bfeaf-3590-4070-95cb-80289feea19a", "address": "fa:16:3e:19:8b:14", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap999bfeaf-35", "ovs_interfaceid": "999bfeaf-3590-4070-95cb-80289feea19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.381 238887 DEBUG nova.network.os_vif_util [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converting VIF {"id": "999bfeaf-3590-4070-95cb-80289feea19a", "address": "fa:16:3e:19:8b:14", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap999bfeaf-35", "ovs_interfaceid": "999bfeaf-3590-4070-95cb-80289feea19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.381 238887 DEBUG nova.network.os_vif_util [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:8b:14,bridge_name='br-int',has_traffic_filtering=True,id=999bfeaf-3590-4070-95cb-80289feea19a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap999bfeaf-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.383 238887 DEBUG nova.objects.instance [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b79cd97-17e8-4d8d-bc7b-2c282a490be3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.395 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  <uuid>2b79cd97-17e8-4d8d-bc7b-2c282a490be3</uuid>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  <name>instance-0000001a</name>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-1206460550</nova:name>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:07:45</nova:creationTime>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        <nova:user uuid="084f489a7b4c4fecba7b0942ed1b7203">tempest-TestEncryptedCinderVolumes-1976450145-project-member</nova:user>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        <nova:project uuid="851fb6d80faf43cc9b2fef1913323704">tempest-TestEncryptedCinderVolumes-1976450145</nova:project>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        <nova:port uuid="999bfeaf-3590-4070-95cb-80289feea19a">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <entry name="serial">2b79cd97-17e8-4d8d-bc7b-2c282a490be3</entry>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <entry name="uuid">2b79cd97-17e8-4d8d-bc7b-2c282a490be3</entry>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/2b79cd97-17e8-4d8d-bc7b-2c282a490be3_disk.config">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-c883c4de-4ba6-485f-86f8-0f0ce82aee70">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <serial>c883c4de-4ba6-485f-86f8-0f0ce82aee70</serial>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <encryption format="luks">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:        <secret type="passphrase" uuid="2fa253ce-dbcc-4b44-b393-6e082f62607f"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      </encryption>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:19:8b:14"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <target dev="tap999bfeaf-35"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/2b79cd97-17e8-4d8d-bc7b-2c282a490be3/console.log" append="off"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:07:46 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:07:46 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:07:46 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:07:46 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.396 238887 DEBUG nova.compute.manager [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Preparing to wait for external event network-vif-plugged-999bfeaf-3590-4070-95cb-80289feea19a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.396 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.396 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.397 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.398 238887 DEBUG nova.virt.libvirt.vif [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:07:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1206460550',display_name='tempest-TestEncryptedCinderVolumes-server-1206460550',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1206460550',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4sEG9hObpGnevoIlqMdkrX6LtyepBRCjADAYBnTUNxH7zE9sXens2JsebTT1q5zN1V4atJxK/wradQkp5n2K1zuz899xdCKCopiRNmhKseY0+RU/9UYAZOT5nySAcl7g==',key_name='tempest-TestEncryptedCinderVolumes-1244227927',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851fb6d80faf43cc9b2fef1913323704',ramdisk_id='',reservation_id='r-hvadytr8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1976450145',owner_user_name='tempest-TestEncryptedCinderVolumes-1976450145-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:07:42Z,user_data=None,user_id='084f489a7b4c4fecba7b0942ed1b7203',uuid=2b79cd97-17e8-4d8d-bc7b-2c282a490be3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "999bfeaf-3590-4070-95cb-80289feea19a", "address": "fa:16:3e:19:8b:14", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap999bfeaf-35", "ovs_interfaceid": "999bfeaf-3590-4070-95cb-80289feea19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.398 238887 DEBUG nova.network.os_vif_util [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converting VIF {"id": "999bfeaf-3590-4070-95cb-80289feea19a", "address": "fa:16:3e:19:8b:14", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap999bfeaf-35", "ovs_interfaceid": "999bfeaf-3590-4070-95cb-80289feea19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.399 238887 DEBUG nova.network.os_vif_util [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:8b:14,bridge_name='br-int',has_traffic_filtering=True,id=999bfeaf-3590-4070-95cb-80289feea19a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap999bfeaf-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.399 238887 DEBUG os_vif [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:8b:14,bridge_name='br-int',has_traffic_filtering=True,id=999bfeaf-3590-4070-95cb-80289feea19a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap999bfeaf-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.400 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.400 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.401 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.405 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.406 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap999bfeaf-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.406 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap999bfeaf-35, col_values=(('external_ids', {'iface-id': '999bfeaf-3590-4070-95cb-80289feea19a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:8b:14', 'vm-uuid': '2b79cd97-17e8-4d8d-bc7b-2c282a490be3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.407 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.409 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:07:46 np0005604943 NetworkManager[49093]: <info>  [1770034066.4094] manager: (tap999bfeaf-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.414 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.415 238887 INFO os_vif [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:8b:14,bridge_name='br-int',has_traffic_filtering=True,id=999bfeaf-3590-4070-95cb-80289feea19a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap999bfeaf-35')#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.462 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.463 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.463 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] No VIF found with MAC fa:16:3e:19:8b:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.463 238887 INFO nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Using config drive#033[00m
Feb  2 07:07:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 535 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 19 KiB/s wr, 113 op/s
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.483 238887 DEBUG nova.storage.rbd_utils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] rbd image 2b79cd97-17e8-4d8d-bc7b-2c282a490be3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.735 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.977 238887 DEBUG nova.compute.manager [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.986 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034066.9854615, 63fa96af-eee7-4ee3-b95a-c4036a37b3bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.986 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] VM Started (Lifecycle Event)#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.989 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:07:46 np0005604943 lvm[267529]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:07:46 np0005604943 lvm[267529]: VG ceph_vg0 finished
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.993 238887 INFO nova.virt.libvirt.driver [-] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Instance spawned successfully.#033[00m
Feb  2 07:07:46 np0005604943 nova_compute[238883]: 2026-02-02 12:07:46.994 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:07:47 np0005604943 lvm[267532]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:07:47 np0005604943 lvm[267532]: VG ceph_vg1 finished
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.019 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.025 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.026 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.027 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.028 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.028 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.029 238887 DEBUG nova.virt.libvirt.driver [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.035 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:07:47 np0005604943 lvm[267533]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:07:47 np0005604943 lvm[267533]: VG ceph_vg2 finished
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.076 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.077 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034066.985627, 63fa96af-eee7-4ee3-b95a-c4036a37b3bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.077 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.101 238887 INFO nova.compute.manager [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Took 7.62 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.101 238887 DEBUG nova.compute.manager [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.103 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.110 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034066.9890506, 63fa96af-eee7-4ee3-b95a-c4036a37b3bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.111 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:07:47 np0005604943 adoring_ptolemy[267400]: {}
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.133 238887 INFO nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Creating config drive at /var/lib/nova/instances/2b79cd97-17e8-4d8d-bc7b-2c282a490be3/disk.config#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.142 238887 DEBUG oslo_concurrency.processutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b79cd97-17e8-4d8d-bc7b-2c282a490be3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp82qhjfpx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:47 np0005604943 systemd[1]: libpod-94b9a228344f4463861117b3f34808bf42c96111d0878c1a6204d0735b0830c0.scope: Deactivated successfully.
Feb  2 07:07:47 np0005604943 systemd[1]: libpod-94b9a228344f4463861117b3f34808bf42c96111d0878c1a6204d0735b0830c0.scope: Consumed 1.246s CPU time.
Feb  2 07:07:47 np0005604943 podman[267365]: 2026-02-02 12:07:47.160378494 +0000 UTC m=+1.079744601 container died 94b9a228344f4463861117b3f34808bf42c96111d0878c1a6204d0735b0830c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ptolemy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.169 238887 DEBUG nova.network.neutron [req-6a040f1c-3e08-406d-a5cb-b119c1e038f3 req-42208b4a-3f9b-4e3c-96aa-ea774419352b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Updated VIF entry in instance network info cache for port 999bfeaf-3590-4070-95cb-80289feea19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.171 238887 DEBUG nova.network.neutron [req-6a040f1c-3e08-406d-a5cb-b119c1e038f3 req-42208b4a-3f9b-4e3c-96aa-ea774419352b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Updating instance_info_cache with network_info: [{"id": "999bfeaf-3590-4070-95cb-80289feea19a", "address": "fa:16:3e:19:8b:14", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap999bfeaf-35", "ovs_interfaceid": "999bfeaf-3590-4070-95cb-80289feea19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.174 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.186 238887 INFO nova.compute.manager [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Took 9.92 seconds to build instance.#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.190 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:07:47 np0005604943 systemd[1]: var-lib-containers-storage-overlay-0650341ed20d328a8e049184ae336f24e72048f73e476748ccb06c15631c7cf0-merged.mount: Deactivated successfully.
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.198 238887 DEBUG oslo_concurrency.lockutils [req-6a040f1c-3e08-406d-a5cb-b119c1e038f3 req-42208b4a-3f9b-4e3c-96aa-ea774419352b 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-2b79cd97-17e8-4d8d-bc7b-2c282a490be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:07:47 np0005604943 podman[267365]: 2026-02-02 12:07:47.217520985 +0000 UTC m=+1.136887062 container remove 94b9a228344f4463861117b3f34808bf42c96111d0878c1a6204d0735b0830c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_ptolemy, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.223 238887 DEBUG oslo_concurrency.lockutils [None req-8f2ee0ca-e1cf-4683-bb7d-333f135c6d70 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:47 np0005604943 systemd[1]: libpod-conmon-94b9a228344f4463861117b3f34808bf42c96111d0878c1a6204d0735b0830c0.scope: Deactivated successfully.
Feb  2 07:07:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.271 238887 DEBUG oslo_concurrency.processutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b79cd97-17e8-4d8d-bc7b-2c282a490be3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp82qhjfpx" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:07:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:07:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.309 238887 DEBUG nova.storage.rbd_utils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] rbd image 2b79cd97-17e8-4d8d-bc7b-2c282a490be3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.313 238887 DEBUG oslo_concurrency.processutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b79cd97-17e8-4d8d-bc7b-2c282a490be3/disk.config 2b79cd97-17e8-4d8d-bc7b-2c282a490be3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:47 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:47Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:03:f7 10.100.0.11
Feb  2 07:07:47 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:47Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:03:f7 10.100.0.11
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.339 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770034052.338617, a8bef119-c694-432a-984b-0f0f2b570103 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.340 238887 INFO nova.compute.manager [-] [instance: a8bef119-c694-432a-984b-0f0f2b570103] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.362 238887 DEBUG nova.compute.manager [None req-c75639a0-439b-4f83-92eb-2d9d3cb091c2 - - - - - -] [instance: a8bef119-c694-432a-984b-0f0f2b570103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.460 238887 DEBUG oslo_concurrency.processutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b79cd97-17e8-4d8d-bc7b-2c282a490be3/disk.config 2b79cd97-17e8-4d8d-bc7b-2c282a490be3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.461 238887 INFO nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Deleting local config drive /var/lib/nova/instances/2b79cd97-17e8-4d8d-bc7b-2c282a490be3/disk.config because it was imported into RBD.#033[00m
Feb  2 07:07:47 np0005604943 kernel: tap999bfeaf-35: entered promiscuous mode
Feb  2 07:07:47 np0005604943 NetworkManager[49093]: <info>  [1770034067.5190] manager: (tap999bfeaf-35): new Tun device (/org/freedesktop/NetworkManager/Devices/128)
Feb  2 07:07:47 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:47Z|00242|binding|INFO|Claiming lport 999bfeaf-3590-4070-95cb-80289feea19a for this chassis.
Feb  2 07:07:47 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:47Z|00243|binding|INFO|999bfeaf-3590-4070-95cb-80289feea19a: Claiming fa:16:3e:19:8b:14 10.100.0.8
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.527 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.529 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.532 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:8b:14 10.100.0.8'], port_security=['fa:16:3e:19:8b:14 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2b79cd97-17e8-4d8d-bc7b-2c282a490be3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb13b2a6-b763-41ef-a5c4-123372e94249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851fb6d80faf43cc9b2fef1913323704', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e0a2abe2-60a1-49ea-89b8-ea7fffedac5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10f2dc12-4c00-4783-968f-4cacec86630e, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=999bfeaf-3590-4070-95cb-80289feea19a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:07:47 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:47Z|00244|binding|INFO|Setting lport 999bfeaf-3590-4070-95cb-80289feea19a ovn-installed in OVS
Feb  2 07:07:47 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:47Z|00245|binding|INFO|Setting lport 999bfeaf-3590-4070-95cb-80289feea19a up in Southbound
Feb  2 07:07:47 np0005604943 systemd-udevd[267522]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.533 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 999bfeaf-3590-4070-95cb-80289feea19a in datapath fb13b2a6-b763-41ef-a5c4-123372e94249 bound to our chassis#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.533 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.536 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fb13b2a6-b763-41ef-a5c4-123372e94249#033[00m
Feb  2 07:07:47 np0005604943 NetworkManager[49093]: <info>  [1770034067.5512] device (tap999bfeaf-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.550 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[da46315c-e392-4f36-96df-381326a708df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.551 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfb13b2a6-b1 in ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:07:47 np0005604943 NetworkManager[49093]: <info>  [1770034067.5529] device (tap999bfeaf-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.555 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfb13b2a6-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.555 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e726ca-9da0-43d4-899a-3b633017e11b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.556 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[41be1940-6114-4f4d-a975-e0af45f93196]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 systemd-machined[206973]: New machine qemu-26-instance-0000001a.
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.570 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[766f33be-8fd5-48c3-bbb3-216df7e8f679]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 systemd[1]: Started Virtual Machine qemu-26-instance-0000001a.
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.586 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[4591b2e0-db74-4eeb-a74b-42dc953224ca]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.617 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[9789e25c-3282-48be-b5e6-1bbf465f5d6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 NetworkManager[49093]: <info>  [1770034067.6255] manager: (tapfb13b2a6-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/129)
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.624 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[83bc3dc6-45ec-4f90-8cb7-38e3886fbbdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.652 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[29cc4346-6e17-4dff-be8f-3a18014fb0ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.656 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[f5dbcfef-9530-40c3-b204-d033ee2b720a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 NetworkManager[49093]: <info>  [1770034067.6797] device (tapfb13b2a6-b0): carrier: link connected
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.686 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fdf0eb-eb5e-4e45-96e6-085b7115b0e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.704 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a09ac554-c6b2-4565-ade0-476a0e236106]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb13b2a6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:41:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451302, 'reachable_time': 26252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267656, 'error': None, 'target': 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.719 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0070be-7ba0-41bb-afc2-cc2f0f8fc6b7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed1:4144'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451302, 'tstamp': 451302}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267657, 'error': None, 'target': 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.731 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b986566c-933e-4acc-8d4c-6a610b8dc46b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb13b2a6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:41:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451302, 'reachable_time': 26252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267658, 'error': None, 'target': 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.758 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[54504429-012d-4ed3-8820-0d69d3f4ac9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.807 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce4a510-3e31-4f37-aa44-0bfba9861cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.809 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb13b2a6-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.809 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.809 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb13b2a6-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.811 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:47 np0005604943 NetworkManager[49093]: <info>  [1770034067.8123] manager: (tapfb13b2a6-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Feb  2 07:07:47 np0005604943 kernel: tapfb13b2a6-b0: entered promiscuous mode
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.816 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.817 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfb13b2a6-b0, col_values=(('external_ids', {'iface-id': '1d9983aa-de5e-40a5-bc99-8bde08c14b08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.818 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.819 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:47 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:47Z|00246|binding|INFO|Releasing lport 1d9983aa-de5e-40a5-bc99-8bde08c14b08 from this chassis (sb_readonly=0)
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.821 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fb13b2a6-b763-41ef-a5c4-123372e94249.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fb13b2a6-b763-41ef-a5c4-123372e94249.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.822 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b0c79b-6417-46ba-aad7-1f3aa4c96e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.823 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-fb13b2a6-b763-41ef-a5c4-123372e94249
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/fb13b2a6-b763-41ef-a5c4-123372e94249.pid.haproxy
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID fb13b2a6-b763-41ef-a5c4-123372e94249
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 07:07:47 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:47.823 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'env', 'PROCESS_TAG=haproxy-fb13b2a6-b763-41ef-a5c4-123372e94249', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fb13b2a6-b763-41ef-a5c4-123372e94249.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 07:07:47 np0005604943 nova_compute[238883]: 2026-02-02 12:07:47.826 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:48 np0005604943 nova_compute[238883]: 2026-02-02 12:07:48.051 238887 DEBUG nova.compute.manager [req-a3fcbc60-3736-47e6-b04e-2555dd0198ec req-d5c95784-1550-4adb-b81c-f27b41850ad9 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Received event network-vif-plugged-999bfeaf-3590-4070-95cb-80289feea19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:48 np0005604943 nova_compute[238883]: 2026-02-02 12:07:48.051 238887 DEBUG oslo_concurrency.lockutils [req-a3fcbc60-3736-47e6-b04e-2555dd0198ec req-d5c95784-1550-4adb-b81c-f27b41850ad9 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:48 np0005604943 nova_compute[238883]: 2026-02-02 12:07:48.051 238887 DEBUG oslo_concurrency.lockutils [req-a3fcbc60-3736-47e6-b04e-2555dd0198ec req-d5c95784-1550-4adb-b81c-f27b41850ad9 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:48 np0005604943 nova_compute[238883]: 2026-02-02 12:07:48.052 238887 DEBUG oslo_concurrency.lockutils [req-a3fcbc60-3736-47e6-b04e-2555dd0198ec req-d5c95784-1550-4adb-b81c-f27b41850ad9 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:48 np0005604943 nova_compute[238883]: 2026-02-02 12:07:48.052 238887 DEBUG nova.compute.manager [req-a3fcbc60-3736-47e6-b04e-2555dd0198ec req-d5c95784-1550-4adb-b81c-f27b41850ad9 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Processing event network-vif-plugged-999bfeaf-3590-4070-95cb-80289feea19a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:07:48 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:07:48 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:07:48 np0005604943 podman[267691]: 2026-02-02 12:07:48.227551714 +0000 UTC m=+0.081708886 container create cc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 07:07:48 np0005604943 systemd[1]: Started libpod-conmon-cc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee.scope.
Feb  2 07:07:48 np0005604943 podman[267691]: 2026-02-02 12:07:48.17847527 +0000 UTC m=+0.032632462 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:07:48 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:07:48 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8574191cf729eee8c99d25f6d7c106188b9e840a156d0ad749f021f4d602e45e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:07:48 np0005604943 podman[267691]: 2026-02-02 12:07:48.328349053 +0000 UTC m=+0.182506255 container init cc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 07:07:48 np0005604943 podman[267691]: 2026-02-02 12:07:48.334601712 +0000 UTC m=+0.188758884 container start cc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 07:07:48 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[267721]: [NOTICE]   (267746) : New worker (267748) forked
Feb  2 07:07:48 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[267721]: [NOTICE]   (267746) : Loading success.
Feb  2 07:07:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 548 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 513 KiB/s wr, 143 op/s
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.160 238887 DEBUG nova.compute.manager [req-241e4d9d-9b0f-4a52-b390-d44764a05529 req-826c41ff-11b5-4e98-807d-da74466e0263 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Received event network-vif-plugged-999bfeaf-3590-4070-95cb-80289feea19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.160 238887 DEBUG oslo_concurrency.lockutils [req-241e4d9d-9b0f-4a52-b390-d44764a05529 req-826c41ff-11b5-4e98-807d-da74466e0263 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.160 238887 DEBUG oslo_concurrency.lockutils [req-241e4d9d-9b0f-4a52-b390-d44764a05529 req-826c41ff-11b5-4e98-807d-da74466e0263 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.161 238887 DEBUG oslo_concurrency.lockutils [req-241e4d9d-9b0f-4a52-b390-d44764a05529 req-826c41ff-11b5-4e98-807d-da74466e0263 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.161 238887 DEBUG nova.compute.manager [req-241e4d9d-9b0f-4a52-b390-d44764a05529 req-826c41ff-11b5-4e98-807d-da74466e0263 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] No waiting events found dispatching network-vif-plugged-999bfeaf-3590-4070-95cb-80289feea19a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.161 238887 WARNING nova.compute.manager [req-241e4d9d-9b0f-4a52-b390-d44764a05529 req-826c41ff-11b5-4e98-807d-da74466e0263 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Received unexpected event network-vif-plugged-999bfeaf-3590-4070-95cb-80289feea19a for instance with vm_state building and task_state spawning.#033[00m
Feb  2 07:07:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 549 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 513 KiB/s wr, 119 op/s
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.835 238887 DEBUG nova.compute.manager [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.837 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034070.8365097, 2b79cd97-17e8-4d8d-bc7b-2c282a490be3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.837 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] VM Started (Lifecycle Event)#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.844 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.848 238887 INFO nova.virt.libvirt.driver [-] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Instance spawned successfully.#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.849 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.882 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.886 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.887 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.887 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.888 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.888 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.888 238887 DEBUG nova.virt.libvirt.driver [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.894 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.968 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.969 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034070.8393507, 2b79cd97-17e8-4d8d-bc7b-2c282a490be3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.969 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.993 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.997 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034070.8437235, 2b79cd97-17e8-4d8d-bc7b-2c282a490be3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:07:50 np0005604943 nova_compute[238883]: 2026-02-02 12:07:50.997 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:07:51 np0005604943 nova_compute[238883]: 2026-02-02 12:07:51.003 238887 INFO nova.compute.manager [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Took 7.31 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:07:51 np0005604943 nova_compute[238883]: 2026-02-02 12:07:51.004 238887 DEBUG nova.compute.manager [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:51 np0005604943 nova_compute[238883]: 2026-02-02 12:07:51.015 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:07:51 np0005604943 nova_compute[238883]: 2026-02-02 12:07:51.020 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:07:51 np0005604943 nova_compute[238883]: 2026-02-02 12:07:51.049 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:07:51 np0005604943 nova_compute[238883]: 2026-02-02 12:07:51.067 238887 INFO nova.compute.manager [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Took 9.81 seconds to build instance.#033[00m
Feb  2 07:07:51 np0005604943 nova_compute[238883]: 2026-02-02 12:07:51.086 238887 DEBUG oslo_concurrency.lockutils [None req-2195a6a1-30e7-4dd3-b13f-c4613a8d98df 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:51 np0005604943 nova_compute[238883]: 2026-02-02 12:07:51.408 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:51 np0005604943 nova_compute[238883]: 2026-02-02 12:07:51.738 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:52 np0005604943 nova_compute[238883]: 2026-02-02 12:07:52.243 238887 DEBUG nova.compute.manager [req-8a4b61aa-0304-45c1-bb0c-041f70df50b3 req-9b1917c6-7a35-4e50-8d68-47b0d72a4244 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Received event network-changed-5698cd0c-cd85-4888-ad3a-2c588d4e45cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:52 np0005604943 nova_compute[238883]: 2026-02-02 12:07:52.243 238887 DEBUG nova.compute.manager [req-8a4b61aa-0304-45c1-bb0c-041f70df50b3 req-9b1917c6-7a35-4e50-8d68-47b0d72a4244 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Refreshing instance network info cache due to event network-changed-5698cd0c-cd85-4888-ad3a-2c588d4e45cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:07:52 np0005604943 nova_compute[238883]: 2026-02-02 12:07:52.244 238887 DEBUG oslo_concurrency.lockutils [req-8a4b61aa-0304-45c1-bb0c-041f70df50b3 req-9b1917c6-7a35-4e50-8d68-47b0d72a4244 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-63fa96af-eee7-4ee3-b95a-c4036a37b3bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:07:52 np0005604943 nova_compute[238883]: 2026-02-02 12:07:52.244 238887 DEBUG oslo_concurrency.lockutils [req-8a4b61aa-0304-45c1-bb0c-041f70df50b3 req-9b1917c6-7a35-4e50-8d68-47b0d72a4244 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-63fa96af-eee7-4ee3-b95a-c4036a37b3bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:07:52 np0005604943 nova_compute[238883]: 2026-02-02 12:07:52.244 238887 DEBUG nova.network.neutron [req-8a4b61aa-0304-45c1-bb0c-041f70df50b3 req-9b1917c6-7a35-4e50-8d68-47b0d72a4244 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Refreshing network info cache for port 5698cd0c-cd85-4888-ad3a-2c588d4e45cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:07:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 553 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 605 KiB/s wr, 173 op/s
Feb  2 07:07:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:07:53 np0005604943 nova_compute[238883]: 2026-02-02 12:07:53.552 238887 DEBUG nova.network.neutron [req-8a4b61aa-0304-45c1-bb0c-041f70df50b3 req-9b1917c6-7a35-4e50-8d68-47b0d72a4244 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Updated VIF entry in instance network info cache for port 5698cd0c-cd85-4888-ad3a-2c588d4e45cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:07:53 np0005604943 nova_compute[238883]: 2026-02-02 12:07:53.553 238887 DEBUG nova.network.neutron [req-8a4b61aa-0304-45c1-bb0c-041f70df50b3 req-9b1917c6-7a35-4e50-8d68-47b0d72a4244 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Updating instance_info_cache with network_info: [{"id": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "address": "fa:16:3e:66:ce:61", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5698cd0c-cd", "ovs_interfaceid": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:53 np0005604943 nova_compute[238883]: 2026-02-02 12:07:53.574 238887 DEBUG oslo_concurrency.lockutils [req-8a4b61aa-0304-45c1-bb0c-041f70df50b3 req-9b1917c6-7a35-4e50-8d68-47b0d72a4244 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-63fa96af-eee7-4ee3-b95a-c4036a37b3bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:07:54 np0005604943 nova_compute[238883]: 2026-02-02 12:07:54.313 238887 DEBUG nova.compute.manager [req-d332a7eb-1ec0-4f16-a287-d7f2a0d3c88e req-247097c6-0344-40d1-94f5-d48c2f62b975 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Received event network-changed-999bfeaf-3590-4070-95cb-80289feea19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:54 np0005604943 nova_compute[238883]: 2026-02-02 12:07:54.313 238887 DEBUG nova.compute.manager [req-d332a7eb-1ec0-4f16-a287-d7f2a0d3c88e req-247097c6-0344-40d1-94f5-d48c2f62b975 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Refreshing instance network info cache due to event network-changed-999bfeaf-3590-4070-95cb-80289feea19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:07:54 np0005604943 nova_compute[238883]: 2026-02-02 12:07:54.314 238887 DEBUG oslo_concurrency.lockutils [req-d332a7eb-1ec0-4f16-a287-d7f2a0d3c88e req-247097c6-0344-40d1-94f5-d48c2f62b975 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-2b79cd97-17e8-4d8d-bc7b-2c282a490be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:07:54 np0005604943 nova_compute[238883]: 2026-02-02 12:07:54.314 238887 DEBUG oslo_concurrency.lockutils [req-d332a7eb-1ec0-4f16-a287-d7f2a0d3c88e req-247097c6-0344-40d1-94f5-d48c2f62b975 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-2b79cd97-17e8-4d8d-bc7b-2c282a490be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:07:54 np0005604943 nova_compute[238883]: 2026-02-02 12:07:54.314 238887 DEBUG nova.network.neutron [req-d332a7eb-1ec0-4f16-a287-d7f2a0d3c88e req-247097c6-0344-40d1-94f5-d48c2f62b975 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Refreshing network info cache for port 999bfeaf-3590-4070-95cb-80289feea19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:07:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 553 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 605 KiB/s wr, 203 op/s
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.110 238887 DEBUG oslo_concurrency.lockutils [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "e1504ff5-76c4-4676-b71d-745b31db4308" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.111 238887 DEBUG oslo_concurrency.lockutils [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.111 238887 DEBUG oslo_concurrency.lockutils [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.111 238887 DEBUG oslo_concurrency.lockutils [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.112 238887 DEBUG oslo_concurrency.lockutils [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.113 238887 INFO nova.compute.manager [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Terminating instance#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.115 238887 DEBUG nova.compute.manager [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:07:55 np0005604943 kernel: tapaa33b4ec-15 (unregistering): left promiscuous mode
Feb  2 07:07:55 np0005604943 NetworkManager[49093]: <info>  [1770034075.2705] device (tapaa33b4ec-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:07:55 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:55Z|00247|binding|INFO|Releasing lport aa33b4ec-1599-4737-b61c-25704b712543 from this chassis (sb_readonly=0)
Feb  2 07:07:55 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:55Z|00248|binding|INFO|Setting lport aa33b4ec-1599-4737-b61c-25704b712543 down in Southbound
Feb  2 07:07:55 np0005604943 ovn_controller[145056]: 2026-02-02T12:07:55Z|00249|binding|INFO|Removing iface tapaa33b4ec-15 ovn-installed in OVS
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.278 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.285 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.292 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:03:f7 10.100.0.11'], port_security=['fa:16:3e:b8:03:f7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e1504ff5-76c4-4676-b71d-745b31db4308', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f445a686-10d3-4653-b101-b0c161d236b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=aa33b4ec-1599-4737-b61c-25704b712543) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.294 155011 INFO neutron.agent.ovn.metadata.agent [-] Port aa33b4ec-1599-4737-b61c-25704b712543 in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 unbound from our chassis#033[00m
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.297 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34290362-cccd-452d-8e7e-22a6057fdb60#033[00m
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.317 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c52d90-0c36-4e48-84be-3facb54c4696]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:55 np0005604943 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Deactivated successfully.
Feb  2 07:07:55 np0005604943 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Consumed 13.081s CPU time.
Feb  2 07:07:55 np0005604943 systemd-machined[206973]: Machine qemu-24-instance-00000018 terminated.
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.353 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd75bb2-ddd0-4ddd-a044-088d2902d2b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.357 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[13523174-135e-48a9-94f0-f12db70e2958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.392 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5aea14-c90f-417b-a1f6-4e64b054e9e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.408 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[edabe8e1-0a78-4f17-a35c-7616d400f7cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34290362-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:39:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446009, 'reachable_time': 23880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267776, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.421 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[71b53ed0-adb0-4919-98c4-9719d47f5d67]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34290362-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446022, 'tstamp': 446022}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267777, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34290362-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446024, 'tstamp': 446024}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267777, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.424 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.426 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.429 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.430 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34290362-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.430 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.430 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34290362-c0, col_values=(('external_ids', {'iface-id': '54e08aa4-a6e9-4ac1-8982-6a9d41e98e5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:07:55.431 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.539 238887 DEBUG nova.network.neutron [req-d332a7eb-1ec0-4f16-a287-d7f2a0d3c88e req-247097c6-0344-40d1-94f5-d48c2f62b975 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Updated VIF entry in instance network info cache for port 999bfeaf-3590-4070-95cb-80289feea19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.540 238887 DEBUG nova.network.neutron [req-d332a7eb-1ec0-4f16-a287-d7f2a0d3c88e req-247097c6-0344-40d1-94f5-d48c2f62b975 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Updating instance_info_cache with network_info: [{"id": "999bfeaf-3590-4070-95cb-80289feea19a", "address": "fa:16:3e:19:8b:14", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap999bfeaf-35", "ovs_interfaceid": "999bfeaf-3590-4070-95cb-80289feea19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.550 238887 INFO nova.virt.libvirt.driver [-] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Instance destroyed successfully.#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.551 238887 DEBUG nova.objects.instance [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'resources' on Instance uuid e1504ff5-76c4-4676-b71d-745b31db4308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.565 238887 DEBUG oslo_concurrency.lockutils [req-d332a7eb-1ec0-4f16-a287-d7f2a0d3c88e req-247097c6-0344-40d1-94f5-d48c2f62b975 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-2b79cd97-17e8-4d8d-bc7b-2c282a490be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.568 238887 DEBUG nova.virt.libvirt.vif [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:07:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-642961730',display_name='tempest-TestVolumeBootPattern-server-642961730',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-642961730',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO8pS3TOdyjX/N+jIFJqRkOzhDpnnQvMuyVIbWIYhdDa58/4gu4+MtK78TaoPi0KBaxHL0lWzg2GYnnuAmOLK3vOMGsshwGNfMmLTGNRIjuKqnaNrr1v/EHYLJ6m8LkFkQ==',key_name='tempest-TestVolumeBootPattern-656783760',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:07:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-aabq3tp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:07:34Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=e1504ff5-76c4-4676-b71d-745b31db4308,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa33b4ec-1599-4737-b61c-25704b712543", "address": "fa:16:3e:b8:03:f7", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa33b4ec-15", "ovs_interfaceid": "aa33b4ec-1599-4737-b61c-25704b712543", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.568 238887 DEBUG nova.network.os_vif_util [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "aa33b4ec-1599-4737-b61c-25704b712543", "address": "fa:16:3e:b8:03:f7", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa33b4ec-15", "ovs_interfaceid": "aa33b4ec-1599-4737-b61c-25704b712543", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.569 238887 DEBUG nova.network.os_vif_util [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:03:f7,bridge_name='br-int',has_traffic_filtering=True,id=aa33b4ec-1599-4737-b61c-25704b712543,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa33b4ec-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.569 238887 DEBUG os_vif [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:03:f7,bridge_name='br-int',has_traffic_filtering=True,id=aa33b4ec-1599-4737-b61c-25704b712543,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa33b4ec-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.571 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.571 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa33b4ec-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.572 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.573 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.576 238887 INFO os_vif [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:03:f7,bridge_name='br-int',has_traffic_filtering=True,id=aa33b4ec-1599-4737-b61c-25704b712543,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa33b4ec-15')#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.730 238887 INFO nova.virt.libvirt.driver [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Deleting instance files /var/lib/nova/instances/e1504ff5-76c4-4676-b71d-745b31db4308_del#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.731 238887 INFO nova.virt.libvirt.driver [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Deletion of /var/lib/nova/instances/e1504ff5-76c4-4676-b71d-745b31db4308_del complete#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.789 238887 INFO nova.compute.manager [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.790 238887 DEBUG oslo.service.loopingcall [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.790 238887 DEBUG nova.compute.manager [-] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:07:55 np0005604943 nova_compute[238883]: 2026-02-02 12:07:55.790 238887 DEBUG nova.network.neutron [-] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.415 238887 DEBUG nova.compute.manager [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Received event network-changed-aa33b4ec-1599-4737-b61c-25704b712543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.415 238887 DEBUG nova.compute.manager [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Refreshing instance network info cache due to event network-changed-aa33b4ec-1599-4737-b61c-25704b712543. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.416 238887 DEBUG oslo_concurrency.lockutils [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-e1504ff5-76c4-4676-b71d-745b31db4308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.416 238887 DEBUG oslo_concurrency.lockutils [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-e1504ff5-76c4-4676-b71d-745b31db4308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.416 238887 DEBUG nova.network.neutron [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Refreshing network info cache for port aa33b4ec-1599-4737-b61c-25704b712543 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.447 238887 DEBUG nova.network.neutron [-] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 553 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 593 KiB/s wr, 202 op/s
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.478 238887 INFO nova.compute.manager [-] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Took 0.69 seconds to deallocate network for instance.#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.538 238887 DEBUG nova.compute.manager [req-8a602cba-fecf-477b-b38f-2ff6a3a62bb5 req-df61b149-1af7-448e-9b67-17228d9e2435 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Received event network-vif-deleted-aa33b4ec-1599-4737-b61c-25704b712543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.648 238887 INFO nova.compute.manager [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Took 0.17 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.692 238887 DEBUG oslo_concurrency.lockutils [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.693 238887 DEBUG oslo_concurrency.lockutils [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.741 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.779 238887 INFO nova.network.neutron [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Port aa33b4ec-1599-4737-b61c-25704b712543 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.780 238887 DEBUG nova.network.neutron [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.795 238887 DEBUG oslo_concurrency.lockutils [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-e1504ff5-76c4-4676-b71d-745b31db4308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.795 238887 DEBUG nova.compute.manager [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Received event network-vif-unplugged-aa33b4ec-1599-4737-b61c-25704b712543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.796 238887 DEBUG oslo_concurrency.lockutils [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.796 238887 DEBUG oslo_concurrency.lockutils [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.796 238887 DEBUG oslo_concurrency.lockutils [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.796 238887 DEBUG nova.compute.manager [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] No waiting events found dispatching network-vif-unplugged-aa33b4ec-1599-4737-b61c-25704b712543 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.796 238887 DEBUG nova.compute.manager [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Received event network-vif-unplugged-aa33b4ec-1599-4737-b61c-25704b712543 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.797 238887 DEBUG nova.compute.manager [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Received event network-vif-plugged-aa33b4ec-1599-4737-b61c-25704b712543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.797 238887 DEBUG oslo_concurrency.lockutils [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.797 238887 DEBUG oslo_concurrency.lockutils [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.797 238887 DEBUG oslo_concurrency.lockutils [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.797 238887 DEBUG nova.compute.manager [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] No waiting events found dispatching network-vif-plugged-aa33b4ec-1599-4737-b61c-25704b712543 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.798 238887 WARNING nova.compute.manager [req-d93527a2-cebe-4c1f-b890-fa7611fa5c34 req-19a927a5-7c6e-45b3-a063-803bbd2a5856 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Received unexpected event network-vif-plugged-aa33b4ec-1599-4737-b61c-25704b712543 for instance with vm_state active and task_state deleting.#033[00m
Feb  2 07:07:56 np0005604943 nova_compute[238883]: 2026-02-02 12:07:56.806 238887 DEBUG oslo_concurrency.processutils [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:07:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:07:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042737671' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:07:57 np0005604943 nova_compute[238883]: 2026-02-02 12:07:57.390 238887 DEBUG oslo_concurrency.processutils [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:07:57 np0005604943 nova_compute[238883]: 2026-02-02 12:07:57.396 238887 DEBUG nova.compute.provider_tree [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:07:57 np0005604943 nova_compute[238883]: 2026-02-02 12:07:57.412 238887 DEBUG nova.scheduler.client.report [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:07:57 np0005604943 nova_compute[238883]: 2026-02-02 12:07:57.432 238887 DEBUG oslo_concurrency.lockutils [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:57 np0005604943 nova_compute[238883]: 2026-02-02 12:07:57.456 238887 INFO nova.scheduler.client.report [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Deleted allocations for instance e1504ff5-76c4-4676-b71d-745b31db4308#033[00m
Feb  2 07:07:57 np0005604943 nova_compute[238883]: 2026-02-02 12:07:57.522 238887 DEBUG oslo_concurrency.lockutils [None req-0e663558-b9bc-445d-9f88-8abd74a4e881 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "e1504ff5-76c4-4676-b71d-745b31db4308" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:07:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:07:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:07:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1585848301' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:07:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:07:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1585848301' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:07:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 553 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 593 KiB/s wr, 218 op/s
Feb  2 07:08:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Feb  2 07:08:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Feb  2 07:08:00 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Feb  2 07:08:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 552 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 113 KiB/s wr, 179 op/s
Feb  2 07:08:00 np0005604943 nova_compute[238883]: 2026-02-02 12:08:00.614 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:00 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:00Z|00052|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.12
Feb  2 07:08:00 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:00Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:66:ce:61 10.100.0.12
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.295 238887 DEBUG nova.compute.manager [req-62fd7336-6dc7-4e92-bb86-a1ec9848db3b req-e9ca7ea8-d72a-40b7-bbf9-33f012afba5f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Received event network-changed-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.296 238887 DEBUG nova.compute.manager [req-62fd7336-6dc7-4e92-bb86-a1ec9848db3b req-e9ca7ea8-d72a-40b7-bbf9-33f012afba5f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Refreshing instance network info cache due to event network-changed-0afadb99-91e4-4b90-8cad-6f4e97daf0f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.296 238887 DEBUG oslo_concurrency.lockutils [req-62fd7336-6dc7-4e92-bb86-a1ec9848db3b req-e9ca7ea8-d72a-40b7-bbf9-33f012afba5f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.296 238887 DEBUG oslo_concurrency.lockutils [req-62fd7336-6dc7-4e92-bb86-a1ec9848db3b req-e9ca7ea8-d72a-40b7-bbf9-33f012afba5f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.296 238887 DEBUG nova.network.neutron [req-62fd7336-6dc7-4e92-bb86-a1ec9848db3b req-e9ca7ea8-d72a-40b7-bbf9-33f012afba5f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Refreshing network info cache for port 0afadb99-91e4-4b90-8cad-6f4e97daf0f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.369 238887 DEBUG oslo_concurrency.lockutils [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "140c7b65-c11d-4032-aaf8-db6b3df5127e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.370 238887 DEBUG oslo_concurrency.lockutils [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.371 238887 DEBUG oslo_concurrency.lockutils [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.371 238887 DEBUG oslo_concurrency.lockutils [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.372 238887 DEBUG oslo_concurrency.lockutils [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.373 238887 INFO nova.compute.manager [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Terminating instance#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.374 238887 DEBUG nova.compute.manager [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:08:01 np0005604943 kernel: tap0afadb99-91 (unregistering): left promiscuous mode
Feb  2 07:08:01 np0005604943 NetworkManager[49093]: <info>  [1770034081.6110] device (tap0afadb99-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:08:01 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:01Z|00250|binding|INFO|Releasing lport 0afadb99-91e4-4b90-8cad-6f4e97daf0f9 from this chassis (sb_readonly=0)
Feb  2 07:08:01 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:01Z|00251|binding|INFO|Setting lport 0afadb99-91e4-4b90-8cad-6f4e97daf0f9 down in Southbound
Feb  2 07:08:01 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:01Z|00252|binding|INFO|Removing iface tap0afadb99-91 ovn-installed in OVS
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.661 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.669 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:be:eb 10.100.0.9'], port_security=['fa:16:3e:18:be:eb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '140c7b65-c11d-4032-aaf8-db6b3df5127e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34290362-cccd-452d-8e7e-22a6057fdb60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e66ed51ccbb840f083b8a86476696747', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f445a686-10d3-4653-b101-b0c161d236b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c1fa263-7715-4982-bfcc-ab441fef3c03, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=0afadb99-91e4-4b90-8cad-6f4e97daf0f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.670 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 0afadb99-91e4-4b90-8cad-6f4e97daf0f9 in datapath 34290362-cccd-452d-8e7e-22a6057fdb60 unbound from our chassis#033[00m
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.672 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34290362-cccd-452d-8e7e-22a6057fdb60, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.672 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.673 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b4585323-6dd6-4226-8ba2-7dfc42b87a60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.674 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 namespace which is not needed anymore#033[00m
Feb  2 07:08:01 np0005604943 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Deactivated successfully.
Feb  2 07:08:01 np0005604943 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Consumed 14.490s CPU time.
Feb  2 07:08:01 np0005604943 systemd-machined[206973]: Machine qemu-22-instance-00000016 terminated.
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.742 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:01 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[265716]: [NOTICE]   (265720) : haproxy version is 2.8.14-c23fe91
Feb  2 07:08:01 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[265716]: [NOTICE]   (265720) : path to executable is /usr/sbin/haproxy
Feb  2 07:08:01 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[265716]: [WARNING]  (265720) : Exiting Master process...
Feb  2 07:08:01 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[265716]: [ALERT]    (265720) : Current worker (265722) exited with code 143 (Terminated)
Feb  2 07:08:01 np0005604943 neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60[265716]: [WARNING]  (265720) : All workers exited. Exiting... (0)
Feb  2 07:08:01 np0005604943 systemd[1]: libpod-117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26.scope: Deactivated successfully.
Feb  2 07:08:01 np0005604943 conmon[265716]: conmon 117476dc13cd843cf890 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26.scope/container/memory.events
Feb  2 07:08:01 np0005604943 podman[267855]: 2026-02-02 12:08:01.80477788 +0000 UTC m=+0.049124836 container died 117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.806 238887 INFO nova.virt.libvirt.driver [-] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Instance destroyed successfully.#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.806 238887 DEBUG nova.objects.instance [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lazy-loading 'resources' on Instance uuid 140c7b65-c11d-4032-aaf8-db6b3df5127e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.819 238887 DEBUG nova.virt.libvirt.vif [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:06:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1233051117',display_name='tempest-TestVolumeBootPattern-server-1233051117',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1233051117',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO8pS3TOdyjX/N+jIFJqRkOzhDpnnQvMuyVIbWIYhdDa58/4gu4+MtK78TaoPi0KBaxHL0lWzg2GYnnuAmOLK3vOMGsshwGNfMmLTGNRIjuKqnaNrr1v/EHYLJ6m8LkFkQ==',key_name='tempest-TestVolumeBootPattern-656783760',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:06:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e66ed51ccbb840f083b8a86476696747',ramdisk_id='',reservation_id='r-wcv20q0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1059348902',owner_user_name='tempest-TestVolumeBootPattern-1059348902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:06:54Z,user_data=None,user_id='5e3fc9d8415541ecaa0da4968c9fa242',uuid=140c7b65-c11d-4032-aaf8-db6b3df5127e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "address": "fa:16:3e:18:be:eb", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afadb99-91", "ovs_interfaceid": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.820 238887 DEBUG nova.network.os_vif_util [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converting VIF {"id": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "address": "fa:16:3e:18:be:eb", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afadb99-91", "ovs_interfaceid": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.820 238887 DEBUG nova.network.os_vif_util [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:be:eb,bridge_name='br-int',has_traffic_filtering=True,id=0afadb99-91e4-4b90-8cad-6f4e97daf0f9,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0afadb99-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.821 238887 DEBUG os_vif [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:be:eb,bridge_name='br-int',has_traffic_filtering=True,id=0afadb99-91e4-4b90-8cad-6f4e97daf0f9,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0afadb99-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.822 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.823 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0afadb99-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.824 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.826 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.830 238887 INFO os_vif [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:be:eb,bridge_name='br-int',has_traffic_filtering=True,id=0afadb99-91e4-4b90-8cad-6f4e97daf0f9,network=Network(34290362-cccd-452d-8e7e-22a6057fdb60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0afadb99-91')#033[00m
Feb  2 07:08:01 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26-userdata-shm.mount: Deactivated successfully.
Feb  2 07:08:01 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a92e0eb499f31ca803eb1dac543b24a235b941239a470adad3c760fd7b418e13-merged.mount: Deactivated successfully.
Feb  2 07:08:01 np0005604943 podman[267855]: 2026-02-02 12:08:01.846394992 +0000 UTC m=+0.090741938 container cleanup 117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:08:01 np0005604943 systemd[1]: libpod-conmon-117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26.scope: Deactivated successfully.
Feb  2 07:08:01 np0005604943 podman[267908]: 2026-02-02 12:08:01.916100353 +0000 UTC m=+0.048204601 container remove 117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.926 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a625a479-ee71-408e-8c8f-946ec4915070]: (4, ('Mon Feb  2 12:08:01 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 (117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26)\n117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26\nMon Feb  2 12:08:01 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 (117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26)\n117476dc13cd843cf890153d44d2b7959038792c689ab5f923aa17dddf3a1c26\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.929 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[409d35f5-84e8-4792-9d50-a8d8d0acb42d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.932 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34290362-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:08:01 np0005604943 kernel: tap34290362-c0: left promiscuous mode
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.939 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.946 238887 DEBUG nova.compute.manager [req-0018574e-a60d-49e6-9113-c51f65c97823 req-2af1c6a6-e05d-4e2b-9932-04884abd4f62 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Received event network-vif-unplugged-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.947 238887 DEBUG oslo_concurrency.lockutils [req-0018574e-a60d-49e6-9113-c51f65c97823 req-2af1c6a6-e05d-4e2b-9932-04884abd4f62 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.947 238887 DEBUG oslo_concurrency.lockutils [req-0018574e-a60d-49e6-9113-c51f65c97823 req-2af1c6a6-e05d-4e2b-9932-04884abd4f62 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.947 238887 DEBUG oslo_concurrency.lockutils [req-0018574e-a60d-49e6-9113-c51f65c97823 req-2af1c6a6-e05d-4e2b-9932-04884abd4f62 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.947 238887 DEBUG nova.compute.manager [req-0018574e-a60d-49e6-9113-c51f65c97823 req-2af1c6a6-e05d-4e2b-9932-04884abd4f62 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] No waiting events found dispatching network-vif-unplugged-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.948 238887 DEBUG nova.compute.manager [req-0018574e-a60d-49e6-9113-c51f65c97823 req-2af1c6a6-e05d-4e2b-9932-04884abd4f62 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Received event network-vif-unplugged-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:08:01 np0005604943 nova_compute[238883]: 2026-02-02 12:08:01.948 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.950 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[5a557239-e1cf-47fc-84ab-50bb26ba6c55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.967 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc4804c-98e2-4dcf-a920-bbb0f5aaa32e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.969 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c21c7b47-e2e9-447c-8556-3f3a57f15cbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.986 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a927b56c-77cf-4b33-9a0f-ff8fca6a0822]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446000, 'reachable_time': 25361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267926, 'error': None, 'target': 'ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.989 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34290362-cccd-452d-8e7e-22a6057fdb60 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:08:01 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:01.989 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3c1e6d-5e25-4be5-8c2b-4529169d05ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:01 np0005604943 systemd[1]: run-netns-ovnmeta\x2d34290362\x2dcccd\x2d452d\x2d8e7e\x2d22a6057fdb60.mount: Deactivated successfully.
Feb  2 07:08:02 np0005604943 nova_compute[238883]: 2026-02-02 12:08:02.019 238887 INFO nova.virt.libvirt.driver [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Deleting instance files /var/lib/nova/instances/140c7b65-c11d-4032-aaf8-db6b3df5127e_del#033[00m
Feb  2 07:08:02 np0005604943 nova_compute[238883]: 2026-02-02 12:08:02.020 238887 INFO nova.virt.libvirt.driver [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Deletion of /var/lib/nova/instances/140c7b65-c11d-4032-aaf8-db6b3df5127e_del complete#033[00m
Feb  2 07:08:02 np0005604943 nova_compute[238883]: 2026-02-02 12:08:02.073 238887 INFO nova.compute.manager [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:08:02 np0005604943 nova_compute[238883]: 2026-02-02 12:08:02.074 238887 DEBUG oslo.service.loopingcall [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:08:02 np0005604943 nova_compute[238883]: 2026-02-02 12:08:02.074 238887 DEBUG nova.compute.manager [-] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:08:02 np0005604943 nova_compute[238883]: 2026-02-02 12:08:02.074 238887 DEBUG nova.network.neutron [-] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:08:02 np0005604943 nova_compute[238883]: 2026-02-02 12:08:02.328 238887 DEBUG nova.network.neutron [req-62fd7336-6dc7-4e92-bb86-a1ec9848db3b req-e9ca7ea8-d72a-40b7-bbf9-33f012afba5f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Updated VIF entry in instance network info cache for port 0afadb99-91e4-4b90-8cad-6f4e97daf0f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:08:02 np0005604943 nova_compute[238883]: 2026-02-02 12:08:02.329 238887 DEBUG nova.network.neutron [req-62fd7336-6dc7-4e92-bb86-a1ec9848db3b req-e9ca7ea8-d72a-40b7-bbf9-33f012afba5f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Updating instance_info_cache with network_info: [{"id": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "address": "fa:16:3e:18:be:eb", "network": {"id": "34290362-cccd-452d-8e7e-22a6057fdb60", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-233453080-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e66ed51ccbb840f083b8a86476696747", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0afadb99-91", "ovs_interfaceid": "0afadb99-91e4-4b90-8cad-6f4e97daf0f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:08:02 np0005604943 nova_compute[238883]: 2026-02-02 12:08:02.349 238887 DEBUG oslo_concurrency.lockutils [req-62fd7336-6dc7-4e92-bb86-a1ec9848db3b req-e9ca7ea8-d72a-40b7-bbf9-33f012afba5f 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-140c7b65-c11d-4032-aaf8-db6b3df5127e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:08:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 535 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 11 KiB/s wr, 136 op/s
Feb  2 07:08:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:08:02 np0005604943 nova_compute[238883]: 2026-02-02 12:08:02.951 238887 DEBUG nova.network.neutron [-] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:08:02 np0005604943 nova_compute[238883]: 2026-02-02 12:08:02.970 238887 INFO nova.compute.manager [-] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Took 0.90 seconds to deallocate network for instance.#033[00m
Feb  2 07:08:03 np0005604943 nova_compute[238883]: 2026-02-02 12:08:03.109 238887 INFO nova.compute.manager [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Took 0.14 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:08:03 np0005604943 nova_compute[238883]: 2026-02-02 12:08:03.165 238887 DEBUG oslo_concurrency.lockutils [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:03 np0005604943 nova_compute[238883]: 2026-02-02 12:08:03.166 238887 DEBUG oslo_concurrency.lockutils [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:03 np0005604943 nova_compute[238883]: 2026-02-02 12:08:03.251 238887 DEBUG oslo_concurrency.processutils [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:08:03 np0005604943 nova_compute[238883]: 2026-02-02 12:08:03.380 238887 DEBUG nova.compute.manager [req-2a939ea1-16bb-4368-98be-b218cccf81f1 req-653b5b79-2566-4894-b99b-006d92f49a10 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Received event network-vif-deleted-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:08:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:08:03 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833205466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:08:03 np0005604943 nova_compute[238883]: 2026-02-02 12:08:03.790 238887 DEBUG oslo_concurrency.processutils [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:08:03 np0005604943 nova_compute[238883]: 2026-02-02 12:08:03.796 238887 DEBUG nova.compute.provider_tree [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:08:03 np0005604943 nova_compute[238883]: 2026-02-02 12:08:03.813 238887 DEBUG nova.scheduler.client.report [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:08:03 np0005604943 nova_compute[238883]: 2026-02-02 12:08:03.836 238887 DEBUG oslo_concurrency.lockutils [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:03 np0005604943 nova_compute[238883]: 2026-02-02 12:08:03.881 238887 INFO nova.scheduler.client.report [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Deleted allocations for instance 140c7b65-c11d-4032-aaf8-db6b3df5127e#033[00m
Feb  2 07:08:03 np0005604943 nova_compute[238883]: 2026-02-02 12:08:03.953 238887 DEBUG oslo_concurrency.lockutils [None req-7fa89db8-e4ee-4c03-8fab-21b7957f4db2 5e3fc9d8415541ecaa0da4968c9fa242 e66ed51ccbb840f083b8a86476696747 - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:04 np0005604943 nova_compute[238883]: 2026-02-02 12:08:04.015 238887 DEBUG nova.compute.manager [req-8e44a2eb-317b-4987-bb3a-3d41f7c473b2 req-32033320-c972-48bf-971e-98a927a204d8 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Received event network-vif-plugged-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:08:04 np0005604943 nova_compute[238883]: 2026-02-02 12:08:04.016 238887 DEBUG oslo_concurrency.lockutils [req-8e44a2eb-317b-4987-bb3a-3d41f7c473b2 req-32033320-c972-48bf-971e-98a927a204d8 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:04 np0005604943 nova_compute[238883]: 2026-02-02 12:08:04.016 238887 DEBUG oslo_concurrency.lockutils [req-8e44a2eb-317b-4987-bb3a-3d41f7c473b2 req-32033320-c972-48bf-971e-98a927a204d8 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:04 np0005604943 nova_compute[238883]: 2026-02-02 12:08:04.017 238887 DEBUG oslo_concurrency.lockutils [req-8e44a2eb-317b-4987-bb3a-3d41f7c473b2 req-32033320-c972-48bf-971e-98a927a204d8 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "140c7b65-c11d-4032-aaf8-db6b3df5127e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:04 np0005604943 nova_compute[238883]: 2026-02-02 12:08:04.017 238887 DEBUG nova.compute.manager [req-8e44a2eb-317b-4987-bb3a-3d41f7c473b2 req-32033320-c972-48bf-971e-98a927a204d8 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] No waiting events found dispatching network-vif-plugged-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:08:04 np0005604943 nova_compute[238883]: 2026-02-02 12:08:04.017 238887 WARNING nova.compute.manager [req-8e44a2eb-317b-4987-bb3a-3d41f7c473b2 req-32033320-c972-48bf-971e-98a927a204d8 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Received unexpected event network-vif-plugged-0afadb99-91e4-4b90-8cad-6f4e97daf0f9 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:08:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 535 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 126 op/s
Feb  2 07:08:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2459065146' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2459065146' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:05 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:05Z|00054|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.12
Feb  2 07:08:05 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:05Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:66:ce:61 10.100.0.12
Feb  2 07:08:05 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:05Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:ce:61 10.100.0.12
Feb  2 07:08:05 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:05Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:ce:61 10.100.0.12
Feb  2 07:08:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 535 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 126 op/s
Feb  2 07:08:06 np0005604943 nova_compute[238883]: 2026-02-02 12:08:06.744 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:06 np0005604943 nova_compute[238883]: 2026-02-02 12:08:06.825 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:07 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:07Z|00058|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.8
Feb  2 07:08:07 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:07Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:19:8b:14 10.100.0.8
Feb  2 07:08:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:08:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Feb  2 07:08:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Feb  2 07:08:07 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Feb  2 07:08:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 488 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 512 KiB/s wr, 164 op/s
Feb  2 07:08:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:08:09
Feb  2 07:08:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:08:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:08:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'volumes', 'images']
Feb  2 07:08:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:08:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:10.032 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:10.033 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:10.034 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 466 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 172 op/s
Feb  2 07:08:10 np0005604943 nova_compute[238883]: 2026-02-02 12:08:10.545 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770034075.5440717, e1504ff5-76c4-4676-b71d-745b31db4308 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:08:10 np0005604943 nova_compute[238883]: 2026-02-02 12:08:10.545 238887 INFO nova.compute.manager [-] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:08:10 np0005604943 nova_compute[238883]: 2026-02-02 12:08:10.578 238887 DEBUG nova.compute.manager [None req-d369d21e-0485-4c4e-9c68-6a7f53df28a8 - - - - - -] [instance: e1504ff5-76c4-4676-b71d-745b31db4308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:08:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:08:11 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:11Z|00253|binding|INFO|Releasing lport 1d9983aa-de5e-40a5-bc99-8bde08c14b08 from this chassis (sb_readonly=0)
Feb  2 07:08:11 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:11Z|00254|binding|INFO|Releasing lport 88fa0d04-0a79-4556-b2c6-d65a3a18ab58 from this chassis (sb_readonly=0)
Feb  2 07:08:11 np0005604943 nova_compute[238883]: 2026-02-02 12:08:11.719 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:11 np0005604943 nova_compute[238883]: 2026-02-02 12:08:11.746 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:11 np0005604943 nova_compute[238883]: 2026-02-02 12:08:11.827 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:12 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:12Z|00060|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.8
Feb  2 07:08:12 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:12Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:19:8b:14 10.100.0.8
Feb  2 07:08:12 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:12Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:8b:14 10.100.0.8
Feb  2 07:08:12 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:12Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:8b:14 10.100.0.8
Feb  2 07:08:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 466 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 104 op/s
Feb  2 07:08:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:08:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 470 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 82 op/s
Feb  2 07:08:15 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:15Z|00255|binding|INFO|Releasing lport 1d9983aa-de5e-40a5-bc99-8bde08c14b08 from this chassis (sb_readonly=0)
Feb  2 07:08:15 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:15Z|00256|binding|INFO|Releasing lport 88fa0d04-0a79-4556-b2c6-d65a3a18ab58 from this chassis (sb_readonly=0)
Feb  2 07:08:15 np0005604943 nova_compute[238883]: 2026-02-02 12:08:15.564 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:16.053 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:08:16 np0005604943 nova_compute[238883]: 2026-02-02 12:08:16.054 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:16 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:16.055 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 07:08:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 470 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 82 op/s
Feb  2 07:08:16 np0005604943 nova_compute[238883]: 2026-02-02 12:08:16.748 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:16 np0005604943 nova_compute[238883]: 2026-02-02 12:08:16.801 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770034081.8003247, 140c7b65-c11d-4032-aaf8-db6b3df5127e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:08:16 np0005604943 nova_compute[238883]: 2026-02-02 12:08:16.802 238887 INFO nova.compute.manager [-] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:08:16 np0005604943 nova_compute[238883]: 2026-02-02 12:08:16.820 238887 DEBUG nova.compute.manager [None req-5c7a2f28-9022-48a6-b35e-1c2beae4bdc5 - - - - - -] [instance: 140c7b65-c11d-4032-aaf8-db6b3df5127e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:08:16 np0005604943 nova_compute[238883]: 2026-02-02 12:08:16.829 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:17 np0005604943 podman[267953]: 2026-02-02 12:08:17.074153595 +0000 UTC m=+0.082697849 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 07:08:17 np0005604943 podman[267952]: 2026-02-02 12:08:17.074445422 +0000 UTC m=+0.085184483 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 07:08:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:08:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 470 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 682 KiB/s rd, 1.2 MiB/s wr, 38 op/s
Feb  2 07:08:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 470 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 610 KiB/s rd, 1.0 MiB/s wr, 35 op/s
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.053181367002133e-06 of space, bias 1.0, pg target 0.00211595441010064 quantized to 32 (current 32)
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005708850702364137 of space, bias 1.0, pg target 1.7126552107092412 quantized to 32 (current 32)
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.0206661563432877e-06 of space, bias 1.0, pg target 0.000604179180746643 quantized to 32 (current 32)
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006667445505038753 of space, bias 1.0, pg target 0.1993566206006587 quantized to 32 (current 32)
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0648804297164338e-06 of space, bias 4.0, pg target 0.0012735969939408549 quantized to 16 (current 16)
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:08:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb  2 07:08:21 np0005604943 nova_compute[238883]: 2026-02-02 12:08:21.750 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Feb  2 07:08:21 np0005604943 nova_compute[238883]: 2026-02-02 12:08:21.831 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Feb  2 07:08:21 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Feb  2 07:08:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 478 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 411 KiB/s rd, 964 KiB/s wr, 9 op/s
Feb  2 07:08:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:08:23 np0005604943 nova_compute[238883]: 2026-02-02 12:08:23.824 238887 DEBUG oslo_concurrency.lockutils [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:23 np0005604943 nova_compute[238883]: 2026-02-02 12:08:23.825 238887 DEBUG oslo_concurrency.lockutils [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:23 np0005604943 nova_compute[238883]: 2026-02-02 12:08:23.825 238887 DEBUG oslo_concurrency.lockutils [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:23 np0005604943 nova_compute[238883]: 2026-02-02 12:08:23.825 238887 DEBUG oslo_concurrency.lockutils [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:23 np0005604943 nova_compute[238883]: 2026-02-02 12:08:23.825 238887 DEBUG oslo_concurrency.lockutils [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:23 np0005604943 nova_compute[238883]: 2026-02-02 12:08:23.827 238887 INFO nova.compute.manager [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Terminating instance#033[00m
Feb  2 07:08:23 np0005604943 nova_compute[238883]: 2026-02-02 12:08:23.827 238887 DEBUG nova.compute.manager [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:08:23 np0005604943 kernel: tap5698cd0c-cd (unregistering): left promiscuous mode
Feb  2 07:08:23 np0005604943 NetworkManager[49093]: <info>  [1770034103.8818] device (tap5698cd0c-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:08:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:23Z|00257|binding|INFO|Releasing lport 5698cd0c-cd85-4888-ad3a-2c588d4e45cf from this chassis (sb_readonly=0)
Feb  2 07:08:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:23Z|00258|binding|INFO|Setting lport 5698cd0c-cd85-4888-ad3a-2c588d4e45cf down in Southbound
Feb  2 07:08:23 np0005604943 nova_compute[238883]: 2026-02-02 12:08:23.888 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:23 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:23Z|00259|binding|INFO|Removing iface tap5698cd0c-cd ovn-installed in OVS
Feb  2 07:08:23 np0005604943 nova_compute[238883]: 2026-02-02 12:08:23.892 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:23.897 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:ce:61 10.100.0.12'], port_security=['fa:16:3e:66:ce:61 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '63fa96af-eee7-4ee3-b95a-c4036a37b3bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efa24ae1-9962-44ca-882a-8d146356fcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c7b49c49c104c079544033b07fb2f3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd3424b4-e169-47dd-816d-ac2340e28ccc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b6e8bcf-741b-41c8-a826-9b6dbb1c260b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=5698cd0c-cd85-4888-ad3a-2c588d4e45cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:08:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:23.898 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 5698cd0c-cd85-4888-ad3a-2c588d4e45cf in datapath efa24ae1-9962-44ca-882a-8d146356fcca unbound from our chassis#033[00m
Feb  2 07:08:23 np0005604943 nova_compute[238883]: 2026-02-02 12:08:23.898 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:23.900 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network efa24ae1-9962-44ca-882a-8d146356fcca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:08:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:23.901 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[148324dc-7a51-4b71-a101-434f81ff2b10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:23 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:23.901 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca namespace which is not needed anymore#033[00m
Feb  2 07:08:23 np0005604943 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Deactivated successfully.
Feb  2 07:08:23 np0005604943 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Consumed 16.446s CPU time.
Feb  2 07:08:23 np0005604943 systemd-machined[206973]: Machine qemu-25-instance-00000019 terminated.
Feb  2 07:08:24 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[267139]: [NOTICE]   (267146) : haproxy version is 2.8.14-c23fe91
Feb  2 07:08:24 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[267139]: [NOTICE]   (267146) : path to executable is /usr/sbin/haproxy
Feb  2 07:08:24 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[267139]: [WARNING]  (267146) : Exiting Master process...
Feb  2 07:08:24 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[267139]: [ALERT]    (267146) : Current worker (267150) exited with code 143 (Terminated)
Feb  2 07:08:24 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[267139]: [WARNING]  (267146) : All workers exited. Exiting... (0)
Feb  2 07:08:24 np0005604943 systemd[1]: libpod-89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b.scope: Deactivated successfully.
Feb  2 07:08:24 np0005604943 podman[268022]: 2026-02-02 12:08:24.03273953 +0000 UTC m=+0.047091015 container died 89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb  2 07:08:24 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b-userdata-shm.mount: Deactivated successfully.
Feb  2 07:08:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:24.058 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:08:24 np0005604943 systemd[1]: var-lib-containers-storage-overlay-78f521dfd34739cdeeb6dfd05828f923c9ae4259c120484d97d7e3391e1a2bbc-merged.mount: Deactivated successfully.
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.069 238887 INFO nova.virt.libvirt.driver [-] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Instance destroyed successfully.#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.071 238887 DEBUG nova.objects.instance [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lazy-loading 'resources' on Instance uuid 63fa96af-eee7-4ee3-b95a-c4036a37b3bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:08:24 np0005604943 podman[268022]: 2026-02-02 12:08:24.081376505 +0000 UTC m=+0.095727980 container cleanup 89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.086 238887 DEBUG nova.virt.libvirt.vif [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:07:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1023312054',display_name='tempest-TransferEncryptedVolumeTest-server-1023312054',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1023312054',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDy1jBskL6RDlru0VUyMYEuQYZdWj4mgPqYNbp/ZxOi/SP0295JAyJLHX3JiQjzCwuF8BsyBv7iV3J6nvrpEE+i/AXa4yixOsMe088OGvWt8cZiFnV/xX7EKx5mK84nug==',key_name='tempest-TransferEncryptedVolumeTest-704936637',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:07:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c7b49c49c104c079544033b07fb2f3d',ramdisk_id='',reservation_id='r-d4lddim5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-347797880',owner_user_name='tempest-TransferEncryptedVolumeTest-347797880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:07:47Z,user_data=None,user_id='cd5824e18d5e443cb24d3bf55ff2c553',uuid=63fa96af-eee7-4ee3-b95a-c4036a37b3bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "address": "fa:16:3e:66:ce:61", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5698cd0c-cd", "ovs_interfaceid": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.087 238887 DEBUG nova.network.os_vif_util [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converting VIF {"id": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "address": "fa:16:3e:66:ce:61", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5698cd0c-cd", "ovs_interfaceid": "5698cd0c-cd85-4888-ad3a-2c588d4e45cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.088 238887 DEBUG nova.network.os_vif_util [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:ce:61,bridge_name='br-int',has_traffic_filtering=True,id=5698cd0c-cd85-4888-ad3a-2c588d4e45cf,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5698cd0c-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:08:24 np0005604943 systemd[1]: libpod-conmon-89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b.scope: Deactivated successfully.
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.089 238887 DEBUG os_vif [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:ce:61,bridge_name='br-int',has_traffic_filtering=True,id=5698cd0c-cd85-4888-ad3a-2c588d4e45cf,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5698cd0c-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.091 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.092 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5698cd0c-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.094 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.096 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.102 238887 INFO os_vif [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:ce:61,bridge_name='br-int',has_traffic_filtering=True,id=5698cd0c-cd85-4888-ad3a-2c588d4e45cf,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5698cd0c-cd')#033[00m
Feb  2 07:08:24 np0005604943 podman[268059]: 2026-02-02 12:08:24.152876099 +0000 UTC m=+0.048704888 container remove 89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb  2 07:08:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:24.158 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[09473e42-5f08-4bf8-a8c4-c06b03c6aa03]: (4, ('Mon Feb  2 12:08:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca (89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b)\n89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b\nMon Feb  2 12:08:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca (89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b)\n89e5ebdba89e26fad1ee4cda51ff20e21ae12566e1376b90893d2c53e6a9879b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:24.159 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[69f60284-716e-4c04-830e-d0ee9fe40f00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:24.160 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefa24ae1-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.162 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:24 np0005604943 kernel: tapefa24ae1-90: left promiscuous mode
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.168 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:24.171 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe64029-3e08-48ee-a2d7-6adf100d0954]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:24.183 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[2a49ae84-f810-47bf-b8de-f7af687e36d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:24.185 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[97172d21-79e6-4720-b427-949b45205d83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:24.199 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[130fb70f-e728-490c-b00b-56c2994f166b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450906, 'reachable_time': 15703, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268092, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:24 np0005604943 systemd[1]: run-netns-ovnmeta\x2defa24ae1\x2d9962\x2d44ca\x2d882a\x2d8d146356fcca.mount: Deactivated successfully.
Feb  2 07:08:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:24.202 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:08:24 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:24.203 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[77b38716-83a0-4b1a-aef7-60151c7daa62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.268 238887 INFO nova.virt.libvirt.driver [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Deleting instance files /var/lib/nova/instances/63fa96af-eee7-4ee3-b95a-c4036a37b3bb_del#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.269 238887 INFO nova.virt.libvirt.driver [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Deletion of /var/lib/nova/instances/63fa96af-eee7-4ee3-b95a-c4036a37b3bb_del complete#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.332 238887 INFO nova.compute.manager [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Took 0.50 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.333 238887 DEBUG oslo.service.loopingcall [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.334 238887 DEBUG nova.compute.manager [-] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.334 238887 DEBUG nova.network.neutron [-] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.352 238887 DEBUG nova.compute.manager [req-14b35fd2-7442-4314-94ab-2c9446aabd27 req-8fd4d794-9b78-465f-acca-f06e19cc7261 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Received event network-vif-unplugged-5698cd0c-cd85-4888-ad3a-2c588d4e45cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.352 238887 DEBUG oslo_concurrency.lockutils [req-14b35fd2-7442-4314-94ab-2c9446aabd27 req-8fd4d794-9b78-465f-acca-f06e19cc7261 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.353 238887 DEBUG oslo_concurrency.lockutils [req-14b35fd2-7442-4314-94ab-2c9446aabd27 req-8fd4d794-9b78-465f-acca-f06e19cc7261 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.353 238887 DEBUG oslo_concurrency.lockutils [req-14b35fd2-7442-4314-94ab-2c9446aabd27 req-8fd4d794-9b78-465f-acca-f06e19cc7261 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.353 238887 DEBUG nova.compute.manager [req-14b35fd2-7442-4314-94ab-2c9446aabd27 req-8fd4d794-9b78-465f-acca-f06e19cc7261 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] No waiting events found dispatching network-vif-unplugged-5698cd0c-cd85-4888-ad3a-2c588d4e45cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.353 238887 DEBUG nova.compute.manager [req-14b35fd2-7442-4314-94ab-2c9446aabd27 req-8fd4d794-9b78-465f-acca-f06e19cc7261 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Received event network-vif-unplugged-5698cd0c-cd85-4888-ad3a-2c588d4e45cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:08:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 478 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 542 KiB/s wr, 19 op/s
Feb  2 07:08:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1332896464' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1332896464' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:24 np0005604943 nova_compute[238883]: 2026-02-02 12:08:24.981 238887 DEBUG nova.network.neutron [-] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:08:25 np0005604943 nova_compute[238883]: 2026-02-02 12:08:25.000 238887 INFO nova.compute.manager [-] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Took 0.67 seconds to deallocate network for instance.#033[00m
Feb  2 07:08:25 np0005604943 nova_compute[238883]: 2026-02-02 12:08:25.069 238887 DEBUG nova.compute.manager [req-22e1633d-c8c0-4bae-967a-98d228ed1350 req-bf9debca-83f1-4440-9f9f-a2db0f081dfd 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Received event network-vif-deleted-5698cd0c-cd85-4888-ad3a-2c588d4e45cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:08:25 np0005604943 nova_compute[238883]: 2026-02-02 12:08:25.175 238887 INFO nova.compute.manager [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Took 0.18 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:08:25 np0005604943 nova_compute[238883]: 2026-02-02 12:08:25.231 238887 DEBUG oslo_concurrency.lockutils [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:25 np0005604943 nova_compute[238883]: 2026-02-02 12:08:25.231 238887 DEBUG oslo_concurrency.lockutils [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:25 np0005604943 nova_compute[238883]: 2026-02-02 12:08:25.308 238887 DEBUG oslo_concurrency.processutils [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:08:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1739022362' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1739022362' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:08:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784840855' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:08:25 np0005604943 nova_compute[238883]: 2026-02-02 12:08:25.906 238887 DEBUG oslo_concurrency.processutils [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:08:25 np0005604943 nova_compute[238883]: 2026-02-02 12:08:25.915 238887 DEBUG nova.compute.provider_tree [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:08:25 np0005604943 nova_compute[238883]: 2026-02-02 12:08:25.933 238887 DEBUG nova.scheduler.client.report [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:08:25 np0005604943 nova_compute[238883]: 2026-02-02 12:08:25.955 238887 DEBUG oslo_concurrency.lockutils [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:25 np0005604943 nova_compute[238883]: 2026-02-02 12:08:25.984 238887 INFO nova.scheduler.client.report [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Deleted allocations for instance 63fa96af-eee7-4ee3-b95a-c4036a37b3bb#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.031 238887 DEBUG oslo_concurrency.lockutils [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.032 238887 DEBUG oslo_concurrency.lockutils [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.032 238887 DEBUG oslo_concurrency.lockutils [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.033 238887 DEBUG oslo_concurrency.lockutils [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.033 238887 DEBUG oslo_concurrency.lockutils [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.034 238887 INFO nova.compute.manager [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Terminating instance#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.035 238887 DEBUG nova.compute.manager [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.042 238887 DEBUG oslo_concurrency.lockutils [None req-c37744a5-bc23-4e2c-acd7-1ff90c42f224 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:26 np0005604943 kernel: tap999bfeaf-35 (unregistering): left promiscuous mode
Feb  2 07:08:26 np0005604943 NetworkManager[49093]: <info>  [1770034106.0788] device (tap999bfeaf-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.083 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:26 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:26Z|00260|binding|INFO|Releasing lport 999bfeaf-3590-4070-95cb-80289feea19a from this chassis (sb_readonly=0)
Feb  2 07:08:26 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:26Z|00261|binding|INFO|Setting lport 999bfeaf-3590-4070-95cb-80289feea19a down in Southbound
Feb  2 07:08:26 np0005604943 ovn_controller[145056]: 2026-02-02T12:08:26Z|00262|binding|INFO|Removing iface tap999bfeaf-35 ovn-installed in OVS
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.090 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:8b:14 10.100.0.8'], port_security=['fa:16:3e:19:8b:14 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2b79cd97-17e8-4d8d-bc7b-2c282a490be3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb13b2a6-b763-41ef-a5c4-123372e94249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851fb6d80faf43cc9b2fef1913323704', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e0a2abe2-60a1-49ea-89b8-ea7fffedac5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10f2dc12-4c00-4783-968f-4cacec86630e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=999bfeaf-3590-4070-95cb-80289feea19a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.091 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 999bfeaf-3590-4070-95cb-80289feea19a in datapath fb13b2a6-b763-41ef-a5c4-123372e94249 unbound from our chassis#033[00m
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.092 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb13b2a6-b763-41ef-a5c4-123372e94249, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.093 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d35f13d4-1fd7-407b-b30b-2701d7827842]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.094 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 namespace which is not needed anymore#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.094 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:26 np0005604943 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Feb  2 07:08:26 np0005604943 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Consumed 19.159s CPU time.
Feb  2 07:08:26 np0005604943 systemd-machined[206973]: Machine qemu-26-instance-0000001a terminated.
Feb  2 07:08:26 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[267721]: [NOTICE]   (267746) : haproxy version is 2.8.14-c23fe91
Feb  2 07:08:26 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[267721]: [NOTICE]   (267746) : path to executable is /usr/sbin/haproxy
Feb  2 07:08:26 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[267721]: [WARNING]  (267746) : Exiting Master process...
Feb  2 07:08:26 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[267721]: [ALERT]    (267746) : Current worker (267748) exited with code 143 (Terminated)
Feb  2 07:08:26 np0005604943 neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249[267721]: [WARNING]  (267746) : All workers exited. Exiting... (0)
Feb  2 07:08:26 np0005604943 systemd[1]: libpod-cc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee.scope: Deactivated successfully.
Feb  2 07:08:26 np0005604943 podman[268138]: 2026-02-02 12:08:26.208836829 +0000 UTC m=+0.042046223 container died cc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 07:08:26 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee-userdata-shm.mount: Deactivated successfully.
Feb  2 07:08:26 np0005604943 systemd[1]: var-lib-containers-storage-overlay-8574191cf729eee8c99d25f6d7c106188b9e840a156d0ad749f021f4d602e45e-merged.mount: Deactivated successfully.
Feb  2 07:08:26 np0005604943 podman[268138]: 2026-02-02 12:08:26.243164469 +0000 UTC m=+0.076373863 container cleanup cc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 07:08:26 np0005604943 systemd[1]: libpod-conmon-cc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee.scope: Deactivated successfully.
Feb  2 07:08:26 np0005604943 NetworkManager[49093]: <info>  [1770034106.2538] manager: (tap999bfeaf-35): new Tun device (/org/freedesktop/NetworkManager/Devices/131)
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.272 238887 INFO nova.virt.libvirt.driver [-] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Instance destroyed successfully.#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.272 238887 DEBUG nova.objects.instance [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lazy-loading 'resources' on Instance uuid 2b79cd97-17e8-4d8d-bc7b-2c282a490be3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.287 238887 DEBUG nova.virt.libvirt.vif [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:07:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1206460550',display_name='tempest-TestEncryptedCinderVolumes-server-1206460550',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1206460550',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4sEG9hObpGnevoIlqMdkrX6LtyepBRCjADAYBnTUNxH7zE9sXens2JsebTT1q5zN1V4atJxK/wradQkp5n2K1zuz899xdCKCopiRNmhKseY0+RU/9UYAZOT5nySAcl7g==',key_name='tempest-TestEncryptedCinderVolumes-1244227927',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:07:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='851fb6d80faf43cc9b2fef1913323704',ramdisk_id='',reservation_id='r-hvadytr8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-1976450145',owner_user_name='tempest-TestEncryptedCinderVolumes-1976450145-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:07:51Z,user_data=None,user_id='084f489a7b4c4fecba7b0942ed1b7203',uuid=2b79cd97-17e8-4d8d-bc7b-2c282a490be3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "999bfeaf-3590-4070-95cb-80289feea19a", "address": "fa:16:3e:19:8b:14", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap999bfeaf-35", "ovs_interfaceid": "999bfeaf-3590-4070-95cb-80289feea19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.288 238887 DEBUG nova.network.os_vif_util [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converting VIF {"id": "999bfeaf-3590-4070-95cb-80289feea19a", "address": "fa:16:3e:19:8b:14", "network": {"id": "fb13b2a6-b763-41ef-a5c4-123372e94249", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1877054829-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851fb6d80faf43cc9b2fef1913323704", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap999bfeaf-35", "ovs_interfaceid": "999bfeaf-3590-4070-95cb-80289feea19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.288 238887 DEBUG nova.network.os_vif_util [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:8b:14,bridge_name='br-int',has_traffic_filtering=True,id=999bfeaf-3590-4070-95cb-80289feea19a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap999bfeaf-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.289 238887 DEBUG os_vif [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:8b:14,bridge_name='br-int',has_traffic_filtering=True,id=999bfeaf-3590-4070-95cb-80289feea19a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap999bfeaf-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.291 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.292 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap999bfeaf-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.294 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.297 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.300 238887 INFO os_vif [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:8b:14,bridge_name='br-int',has_traffic_filtering=True,id=999bfeaf-3590-4070-95cb-80289feea19a,network=Network(fb13b2a6-b763-41ef-a5c4-123372e94249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap999bfeaf-35')#033[00m
Feb  2 07:08:26 np0005604943 podman[268170]: 2026-02-02 12:08:26.307330591 +0000 UTC m=+0.043824500 container remove cc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.311 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[03067b44-069c-4241-83b8-590b885d9ba8]: (4, ('Mon Feb  2 12:08:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 (cc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee)\ncc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee\nMon Feb  2 12:08:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 (cc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee)\ncc7f995f8acb01a3bb3ad458344963c94dac27e80a92dd7c4a9581355e66dbee\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.314 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b17ab56e-445e-4aeb-869c-f0e00e81af5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.316 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb13b2a6-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:08:26 np0005604943 kernel: tapfb13b2a6-b0: left promiscuous mode
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.323 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.327 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[0baaf001-9672-4078-bf65-aaa2cb2adea0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.343 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1ecd3b0a-9a24-45e4-bff7-8e9494ea5691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.345 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[bf305d51-816b-45ec-a445-24eee121e52d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.361 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[655c8171-f24c-414b-bcde-f475259ea98c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451296, 'reachable_time': 32316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268210, 'error': None, 'target': 'ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:26 np0005604943 systemd[1]: run-netns-ovnmeta\x2dfb13b2a6\x2db763\x2d41ef\x2da5c4\x2d123372e94249.mount: Deactivated successfully.
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.363 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fb13b2a6-b763-41ef-a5c4-123372e94249 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:08:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:08:26.363 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[ceea574f-266d-4d24-b9c7-83fe6201d0e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.431 238887 DEBUG nova.compute.manager [req-e3e6c161-1f29-49d6-a2c0-8cd3aecd27ec req-47960cbb-8689-4cb7-ade7-7d9a568471d2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Received event network-vif-plugged-5698cd0c-cd85-4888-ad3a-2c588d4e45cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.432 238887 DEBUG oslo_concurrency.lockutils [req-e3e6c161-1f29-49d6-a2c0-8cd3aecd27ec req-47960cbb-8689-4cb7-ade7-7d9a568471d2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.432 238887 DEBUG oslo_concurrency.lockutils [req-e3e6c161-1f29-49d6-a2c0-8cd3aecd27ec req-47960cbb-8689-4cb7-ade7-7d9a568471d2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.432 238887 DEBUG oslo_concurrency.lockutils [req-e3e6c161-1f29-49d6-a2c0-8cd3aecd27ec req-47960cbb-8689-4cb7-ade7-7d9a568471d2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63fa96af-eee7-4ee3-b95a-c4036a37b3bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.432 238887 DEBUG nova.compute.manager [req-e3e6c161-1f29-49d6-a2c0-8cd3aecd27ec req-47960cbb-8689-4cb7-ade7-7d9a568471d2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] No waiting events found dispatching network-vif-plugged-5698cd0c-cd85-4888-ad3a-2c588d4e45cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.432 238887 WARNING nova.compute.manager [req-e3e6c161-1f29-49d6-a2c0-8cd3aecd27ec req-47960cbb-8689-4cb7-ade7-7d9a568471d2 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Received unexpected event network-vif-plugged-5698cd0c-cd85-4888-ad3a-2c588d4e45cf for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.474 238887 INFO nova.virt.libvirt.driver [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Deleting instance files /var/lib/nova/instances/2b79cd97-17e8-4d8d-bc7b-2c282a490be3_del#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.475 238887 INFO nova.virt.libvirt.driver [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Deletion of /var/lib/nova/instances/2b79cd97-17e8-4d8d-bc7b-2c282a490be3_del complete#033[00m
Feb  2 07:08:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 478 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 542 KiB/s wr, 19 op/s
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.540 238887 INFO nova.compute.manager [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Took 0.50 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.541 238887 DEBUG oslo.service.loopingcall [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.541 238887 DEBUG nova.compute.manager [-] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.541 238887 DEBUG nova.network.neutron [-] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:08:26 np0005604943 nova_compute[238883]: 2026-02-02 12:08:26.753 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/526957143' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/526957143' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:27 np0005604943 nova_compute[238883]: 2026-02-02 12:08:27.271 238887 DEBUG nova.compute.manager [req-625f03e2-3289-4bdb-a200-c89690ee1dbb req-e8a8fea4-8150-4511-9ee0-74aa32f3a807 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Received event network-vif-unplugged-999bfeaf-3590-4070-95cb-80289feea19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:08:27 np0005604943 nova_compute[238883]: 2026-02-02 12:08:27.272 238887 DEBUG oslo_concurrency.lockutils [req-625f03e2-3289-4bdb-a200-c89690ee1dbb req-e8a8fea4-8150-4511-9ee0-74aa32f3a807 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:27 np0005604943 nova_compute[238883]: 2026-02-02 12:08:27.273 238887 DEBUG oslo_concurrency.lockutils [req-625f03e2-3289-4bdb-a200-c89690ee1dbb req-e8a8fea4-8150-4511-9ee0-74aa32f3a807 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:27 np0005604943 nova_compute[238883]: 2026-02-02 12:08:27.273 238887 DEBUG oslo_concurrency.lockutils [req-625f03e2-3289-4bdb-a200-c89690ee1dbb req-e8a8fea4-8150-4511-9ee0-74aa32f3a807 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:27 np0005604943 nova_compute[238883]: 2026-02-02 12:08:27.274 238887 DEBUG nova.compute.manager [req-625f03e2-3289-4bdb-a200-c89690ee1dbb req-e8a8fea4-8150-4511-9ee0-74aa32f3a807 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] No waiting events found dispatching network-vif-unplugged-999bfeaf-3590-4070-95cb-80289feea19a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:08:27 np0005604943 nova_compute[238883]: 2026-02-02 12:08:27.274 238887 DEBUG nova.compute.manager [req-625f03e2-3289-4bdb-a200-c89690ee1dbb req-e8a8fea4-8150-4511-9ee0-74aa32f3a807 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Received event network-vif-unplugged-999bfeaf-3590-4070-95cb-80289feea19a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:08:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:08:28 np0005604943 nova_compute[238883]: 2026-02-02 12:08:28.001 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:08:28 np0005604943 nova_compute[238883]: 2026-02-02 12:08:28.046 238887 WARNING nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Feb  2 07:08:28 np0005604943 nova_compute[238883]: 2026-02-02 12:08:28.046 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Triggering sync for uuid 2b79cd97-17e8-4d8d-bc7b-2c282a490be3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb  2 07:08:28 np0005604943 nova_compute[238883]: 2026-02-02 12:08:28.047 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 477 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 979 KiB/s rd, 538 KiB/s wr, 89 op/s
Feb  2 07:08:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/985547966' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:28 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/985547966' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:29 np0005604943 nova_compute[238883]: 2026-02-02 12:08:29.243 238887 DEBUG nova.network.neutron [-] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:08:29 np0005604943 nova_compute[238883]: 2026-02-02 12:08:29.267 238887 INFO nova.compute.manager [-] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Took 2.73 seconds to deallocate network for instance.#033[00m
Feb  2 07:08:29 np0005604943 nova_compute[238883]: 2026-02-02 12:08:29.504 238887 DEBUG nova.compute.manager [req-5f2d558c-682f-4891-9924-a0b6c0c39e32 req-45ba4d5a-d1a2-48dc-847c-be0c1629d73e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Received event network-vif-plugged-999bfeaf-3590-4070-95cb-80289feea19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:08:29 np0005604943 nova_compute[238883]: 2026-02-02 12:08:29.505 238887 DEBUG oslo_concurrency.lockutils [req-5f2d558c-682f-4891-9924-a0b6c0c39e32 req-45ba4d5a-d1a2-48dc-847c-be0c1629d73e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:29 np0005604943 nova_compute[238883]: 2026-02-02 12:08:29.505 238887 DEBUG oslo_concurrency.lockutils [req-5f2d558c-682f-4891-9924-a0b6c0c39e32 req-45ba4d5a-d1a2-48dc-847c-be0c1629d73e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:29 np0005604943 nova_compute[238883]: 2026-02-02 12:08:29.505 238887 DEBUG oslo_concurrency.lockutils [req-5f2d558c-682f-4891-9924-a0b6c0c39e32 req-45ba4d5a-d1a2-48dc-847c-be0c1629d73e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:29 np0005604943 nova_compute[238883]: 2026-02-02 12:08:29.506 238887 DEBUG nova.compute.manager [req-5f2d558c-682f-4891-9924-a0b6c0c39e32 req-45ba4d5a-d1a2-48dc-847c-be0c1629d73e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] No waiting events found dispatching network-vif-plugged-999bfeaf-3590-4070-95cb-80289feea19a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:08:29 np0005604943 nova_compute[238883]: 2026-02-02 12:08:29.506 238887 WARNING nova.compute.manager [req-5f2d558c-682f-4891-9924-a0b6c0c39e32 req-45ba4d5a-d1a2-48dc-847c-be0c1629d73e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Received unexpected event network-vif-plugged-999bfeaf-3590-4070-95cb-80289feea19a for instance with vm_state active and task_state deleting.#033[00m
Feb  2 07:08:29 np0005604943 nova_compute[238883]: 2026-02-02 12:08:29.506 238887 DEBUG nova.compute.manager [req-5f2d558c-682f-4891-9924-a0b6c0c39e32 req-45ba4d5a-d1a2-48dc-847c-be0c1629d73e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Received event network-vif-deleted-999bfeaf-3590-4070-95cb-80289feea19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:08:29 np0005604943 nova_compute[238883]: 2026-02-02 12:08:29.638 238887 INFO nova.compute.manager [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Took 0.37 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:08:29 np0005604943 nova_compute[238883]: 2026-02-02 12:08:29.682 238887 DEBUG oslo_concurrency.lockutils [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:29 np0005604943 nova_compute[238883]: 2026-02-02 12:08:29.683 238887 DEBUG oslo_concurrency.lockutils [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:29 np0005604943 nova_compute[238883]: 2026-02-02 12:08:29.931 238887 DEBUG oslo_concurrency.processutils [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:08:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:30 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2317607465' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:30 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2317607465' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:08:30 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2370512259' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:08:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 477 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 529 KiB/s wr, 121 op/s
Feb  2 07:08:30 np0005604943 nova_compute[238883]: 2026-02-02 12:08:30.517 238887 DEBUG oslo_concurrency.processutils [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:08:30 np0005604943 nova_compute[238883]: 2026-02-02 12:08:30.523 238887 DEBUG nova.compute.provider_tree [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:08:30 np0005604943 nova_compute[238883]: 2026-02-02 12:08:30.545 238887 DEBUG nova.scheduler.client.report [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:08:30 np0005604943 nova_compute[238883]: 2026-02-02 12:08:30.575 238887 DEBUG oslo_concurrency.lockutils [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:30 np0005604943 nova_compute[238883]: 2026-02-02 12:08:30.629 238887 INFO nova.scheduler.client.report [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Deleted allocations for instance 2b79cd97-17e8-4d8d-bc7b-2c282a490be3#033[00m
Feb  2 07:08:30 np0005604943 nova_compute[238883]: 2026-02-02 12:08:30.739 238887 DEBUG oslo_concurrency.lockutils [None req-b4e3d252-f35b-4536-b63e-67211cd81587 084f489a7b4c4fecba7b0942ed1b7203 851fb6d80faf43cc9b2fef1913323704 - - default default] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:30 np0005604943 nova_compute[238883]: 2026-02-02 12:08:30.741 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:30 np0005604943 nova_compute[238883]: 2026-02-02 12:08:30.742 238887 INFO nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Feb  2 07:08:30 np0005604943 nova_compute[238883]: 2026-02-02 12:08:30.742 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "2b79cd97-17e8-4d8d-bc7b-2c282a490be3" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Feb  2 07:08:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Feb  2 07:08:31 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Feb  2 07:08:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1622745723' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1622745723' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:31 np0005604943 nova_compute[238883]: 2026-02-02 12:08:31.295 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:31 np0005604943 nova_compute[238883]: 2026-02-02 12:08:31.755 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2914906088' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2914906088' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/404155359' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/404155359' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 345 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 11 KiB/s wr, 162 op/s
Feb  2 07:08:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:08:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1739323496' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1739323496' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 271 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 600 KiB/s rd, 6.9 KiB/s wr, 201 op/s
Feb  2 07:08:34 np0005604943 nova_compute[238883]: 2026-02-02 12:08:34.687 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:08:36 np0005604943 nova_compute[238883]: 2026-02-02 12:08:36.329 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 271 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 600 KiB/s rd, 6.9 KiB/s wr, 201 op/s
Feb  2 07:08:36 np0005604943 nova_compute[238883]: 2026-02-02 12:08:36.757 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:37 np0005604943 nova_compute[238883]: 2026-02-02 12:08:37.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:08:37 np0005604943 nova_compute[238883]: 2026-02-02 12:08:37.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:08:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:08:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Feb  2 07:08:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Feb  2 07:08:37 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Feb  2 07:08:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 271 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 4.4 KiB/s wr, 133 op/s
Feb  2 07:08:38 np0005604943 nova_compute[238883]: 2026-02-02 12:08:38.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:08:38 np0005604943 nova_compute[238883]: 2026-02-02 12:08:38.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:08:38 np0005604943 nova_compute[238883]: 2026-02-02 12:08:38.662 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 07:08:39 np0005604943 nova_compute[238883]: 2026-02-02 12:08:39.064 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770034104.0630643, 63fa96af-eee7-4ee3-b95a-c4036a37b3bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:08:39 np0005604943 nova_compute[238883]: 2026-02-02 12:08:39.064 238887 INFO nova.compute.manager [-] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:08:39 np0005604943 nova_compute[238883]: 2026-02-02 12:08:39.087 238887 DEBUG nova.compute.manager [None req-6e11b1a2-deec-47bf-a43e-299bf3d9cfc5 - - - - - -] [instance: 63fa96af-eee7-4ee3-b95a-c4036a37b3bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:08:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Feb  2 07:08:39 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Feb  2 07:08:39 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Feb  2 07:08:39 np0005604943 nova_compute[238883]: 2026-02-02 12:08:39.390 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:39 np0005604943 nova_compute[238883]: 2026-02-02 12:08:39.483 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:39 np0005604943 nova_compute[238883]: 2026-02-02 12:08:39.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:08:39 np0005604943 nova_compute[238883]: 2026-02-02 12:08:39.682 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:39 np0005604943 nova_compute[238883]: 2026-02-02 12:08:39.683 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:39 np0005604943 nova_compute[238883]: 2026-02-02 12:08:39.683 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:39 np0005604943 nova_compute[238883]: 2026-02-02 12:08:39.683 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:08:39 np0005604943 nova_compute[238883]: 2026-02-02 12:08:39.684 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:08:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:08:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1061494187' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:08:40 np0005604943 nova_compute[238883]: 2026-02-02 12:08:40.248 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:08:40 np0005604943 nova_compute[238883]: 2026-02-02 12:08:40.450 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:08:40 np0005604943 nova_compute[238883]: 2026-02-02 12:08:40.451 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4324MB free_disk=59.98814160190523GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:08:40 np0005604943 nova_compute[238883]: 2026-02-02 12:08:40.452 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:08:40 np0005604943 nova_compute[238883]: 2026-02-02 12:08:40.452 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:08:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 271 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 4.4 KiB/s wr, 77 op/s
Feb  2 07:08:40 np0005604943 nova_compute[238883]: 2026-02-02 12:08:40.554 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:08:40 np0005604943 nova_compute[238883]: 2026-02-02 12:08:40.554 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:08:40 np0005604943 nova_compute[238883]: 2026-02-02 12:08:40.576 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:08:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:08:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:08:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:08:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:08:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:08:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:08:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:08:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2837603209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:08:41 np0005604943 nova_compute[238883]: 2026-02-02 12:08:41.145 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:08:41 np0005604943 nova_compute[238883]: 2026-02-02 12:08:41.152 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:08:41 np0005604943 nova_compute[238883]: 2026-02-02 12:08:41.200 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:08:41 np0005604943 nova_compute[238883]: 2026-02-02 12:08:41.268 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770034106.2678392, 2b79cd97-17e8-4d8d-bc7b-2c282a490be3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:08:41 np0005604943 nova_compute[238883]: 2026-02-02 12:08:41.269 238887 INFO nova.compute.manager [-] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:08:41 np0005604943 nova_compute[238883]: 2026-02-02 12:08:41.311 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:08:41 np0005604943 nova_compute[238883]: 2026-02-02 12:08:41.312 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:08:41 np0005604943 nova_compute[238883]: 2026-02-02 12:08:41.322 238887 DEBUG nova.compute.manager [None req-425f0193-ca0c-45e7-abf1-f62385669d45 - - - - - -] [instance: 2b79cd97-17e8-4d8d-bc7b-2c282a490be3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:08:41 np0005604943 nova_compute[238883]: 2026-02-02 12:08:41.332 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:41 np0005604943 nova_compute[238883]: 2026-02-02 12:08:41.759 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Feb  2 07:08:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Feb  2 07:08:42 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Feb  2 07:08:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 271 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 KiB/s wr, 55 op/s
Feb  2 07:08:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:08:44 np0005604943 nova_compute[238883]: 2026-02-02 12:08:44.312 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:08:44 np0005604943 nova_compute[238883]: 2026-02-02 12:08:44.313 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:08:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 271 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.4 KiB/s wr, 71 op/s
Feb  2 07:08:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Feb  2 07:08:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Feb  2 07:08:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Feb  2 07:08:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/362954443' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/362954443' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:45 np0005604943 nova_compute[238883]: 2026-02-02 12:08:45.637 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:08:45 np0005604943 nova_compute[238883]: 2026-02-02 12:08:45.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:08:45 np0005604943 nova_compute[238883]: 2026-02-02 12:08:45.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:08:46 np0005604943 nova_compute[238883]: 2026-02-02 12:08:46.337 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 271 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.9 KiB/s wr, 65 op/s
Feb  2 07:08:46 np0005604943 nova_compute[238883]: 2026-02-02 12:08:46.784 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:47 np0005604943 podman[268306]: 2026-02-02 12:08:47.567925674 +0000 UTC m=+0.093610135 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:08:47 np0005604943 podman[268305]: 2026-02-02 12:08:47.578905102 +0000 UTC m=+0.104164811 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:08:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:08:48 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:08:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 271 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 38 KiB/s wr, 132 op/s
Feb  2 07:08:48 np0005604943 podman[268468]: 2026-02-02 12:08:48.510473591 +0000 UTC m=+0.114478422 container create c66a0d2feaa84df0d5208d936e4b4cd8b8de40eb87eb728e3772b5fd5d023dfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:08:48 np0005604943 podman[268468]: 2026-02-02 12:08:48.417115673 +0000 UTC m=+0.021120524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:08:48 np0005604943 systemd[1]: Started libpod-conmon-c66a0d2feaa84df0d5208d936e4b4cd8b8de40eb87eb728e3772b5fd5d023dfc.scope.
Feb  2 07:08:48 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:08:48 np0005604943 podman[268468]: 2026-02-02 12:08:48.625321571 +0000 UTC m=+0.229326422 container init c66a0d2feaa84df0d5208d936e4b4cd8b8de40eb87eb728e3772b5fd5d023dfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb  2 07:08:48 np0005604943 podman[268468]: 2026-02-02 12:08:48.635248861 +0000 UTC m=+0.239253712 container start c66a0d2feaa84df0d5208d936e4b4cd8b8de40eb87eb728e3772b5fd5d023dfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:08:48 np0005604943 podman[268468]: 2026-02-02 12:08:48.639243615 +0000 UTC m=+0.243248476 container attach c66a0d2feaa84df0d5208d936e4b4cd8b8de40eb87eb728e3772b5fd5d023dfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:08:48 np0005604943 systemd[1]: libpod-c66a0d2feaa84df0d5208d936e4b4cd8b8de40eb87eb728e3772b5fd5d023dfc.scope: Deactivated successfully.
Feb  2 07:08:48 np0005604943 musing_brown[268484]: 167 167
Feb  2 07:08:48 np0005604943 conmon[268484]: conmon c66a0d2feaa84df0d520 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c66a0d2feaa84df0d5208d936e4b4cd8b8de40eb87eb728e3772b5fd5d023dfc.scope/container/memory.events
Feb  2 07:08:48 np0005604943 podman[268468]: 2026-02-02 12:08:48.644714408 +0000 UTC m=+0.248719239 container died c66a0d2feaa84df0d5208d936e4b4cd8b8de40eb87eb728e3772b5fd5d023dfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:08:48 np0005604943 systemd[1]: var-lib-containers-storage-overlay-300c87f91abdff01767df23da768edca409885d4c64346d2c52e517062778376-merged.mount: Deactivated successfully.
Feb  2 07:08:48 np0005604943 podman[268468]: 2026-02-02 12:08:48.691862564 +0000 UTC m=+0.295867385 container remove c66a0d2feaa84df0d5208d936e4b4cd8b8de40eb87eb728e3772b5fd5d023dfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 07:08:48 np0005604943 systemd[1]: libpod-conmon-c66a0d2feaa84df0d5208d936e4b4cd8b8de40eb87eb728e3772b5fd5d023dfc.scope: Deactivated successfully.
Feb  2 07:08:48 np0005604943 podman[268507]: 2026-02-02 12:08:48.828127046 +0000 UTC m=+0.045332299 container create eb933a47fa3c1002fbb48c6240d28efda2ad518da958baffbc0a10067e5f761c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 07:08:48 np0005604943 systemd[1]: Started libpod-conmon-eb933a47fa3c1002fbb48c6240d28efda2ad518da958baffbc0a10067e5f761c.scope.
Feb  2 07:08:48 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:08:48 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcf8dac3ee62a4d51c2557a99885544e9ed0594080eb2688d0728b48601948a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:08:48 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcf8dac3ee62a4d51c2557a99885544e9ed0594080eb2688d0728b48601948a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:08:48 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcf8dac3ee62a4d51c2557a99885544e9ed0594080eb2688d0728b48601948a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:08:48 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcf8dac3ee62a4d51c2557a99885544e9ed0594080eb2688d0728b48601948a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:08:48 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcf8dac3ee62a4d51c2557a99885544e9ed0594080eb2688d0728b48601948a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:08:48 np0005604943 podman[268507]: 2026-02-02 12:08:48.807828664 +0000 UTC m=+0.025033947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:08:48 np0005604943 podman[268507]: 2026-02-02 12:08:48.912438447 +0000 UTC m=+0.129643720 container init eb933a47fa3c1002fbb48c6240d28efda2ad518da958baffbc0a10067e5f761c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Feb  2 07:08:48 np0005604943 podman[268507]: 2026-02-02 12:08:48.920518438 +0000 UTC m=+0.137723691 container start eb933a47fa3c1002fbb48c6240d28efda2ad518da958baffbc0a10067e5f761c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:08:48 np0005604943 podman[268507]: 2026-02-02 12:08:48.924520743 +0000 UTC m=+0.141726006 container attach eb933a47fa3c1002fbb48c6240d28efda2ad518da958baffbc0a10067e5f761c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 07:08:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/33894970' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/33894970' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:49 np0005604943 flamboyant_euclid[268524]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:08:49 np0005604943 flamboyant_euclid[268524]: --> All data devices are unavailable
Feb  2 07:08:49 np0005604943 systemd[1]: libpod-eb933a47fa3c1002fbb48c6240d28efda2ad518da958baffbc0a10067e5f761c.scope: Deactivated successfully.
Feb  2 07:08:49 np0005604943 podman[268507]: 2026-02-02 12:08:49.384971842 +0000 UTC m=+0.602177095 container died eb933a47fa3c1002fbb48c6240d28efda2ad518da958baffbc0a10067e5f761c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:08:49 np0005604943 systemd[1]: var-lib-containers-storage-overlay-dcf8dac3ee62a4d51c2557a99885544e9ed0594080eb2688d0728b48601948a3-merged.mount: Deactivated successfully.
Feb  2 07:08:49 np0005604943 podman[268507]: 2026-02-02 12:08:49.430344851 +0000 UTC m=+0.647550104 container remove eb933a47fa3c1002fbb48c6240d28efda2ad518da958baffbc0a10067e5f761c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 07:08:49 np0005604943 systemd[1]: libpod-conmon-eb933a47fa3c1002fbb48c6240d28efda2ad518da958baffbc0a10067e5f761c.scope: Deactivated successfully.
Feb  2 07:08:49 np0005604943 podman[268616]: 2026-02-02 12:08:49.878700564 +0000 UTC m=+0.038215864 container create 740700d9cd7e1da2054a311d32b70f89aef4be5f7a32f26fb2ce1142a7bbb655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_colden, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 07:08:49 np0005604943 systemd[1]: Started libpod-conmon-740700d9cd7e1da2054a311d32b70f89aef4be5f7a32f26fb2ce1142a7bbb655.scope.
Feb  2 07:08:49 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:08:49 np0005604943 podman[268616]: 2026-02-02 12:08:49.860989979 +0000 UTC m=+0.020505309 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:08:49 np0005604943 podman[268616]: 2026-02-02 12:08:49.966599917 +0000 UTC m=+0.126115267 container init 740700d9cd7e1da2054a311d32b70f89aef4be5f7a32f26fb2ce1142a7bbb655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_colden, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 07:08:49 np0005604943 podman[268616]: 2026-02-02 12:08:49.972716558 +0000 UTC m=+0.132231858 container start 740700d9cd7e1da2054a311d32b70f89aef4be5f7a32f26fb2ce1142a7bbb655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_colden, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:08:49 np0005604943 objective_colden[268633]: 167 167
Feb  2 07:08:49 np0005604943 podman[268616]: 2026-02-02 12:08:49.976271981 +0000 UTC m=+0.135787291 container attach 740700d9cd7e1da2054a311d32b70f89aef4be5f7a32f26fb2ce1142a7bbb655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_colden, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Feb  2 07:08:49 np0005604943 systemd[1]: libpod-740700d9cd7e1da2054a311d32b70f89aef4be5f7a32f26fb2ce1142a7bbb655.scope: Deactivated successfully.
Feb  2 07:08:49 np0005604943 podman[268616]: 2026-02-02 12:08:49.976890638 +0000 UTC m=+0.136405938 container died 740700d9cd7e1da2054a311d32b70f89aef4be5f7a32f26fb2ce1142a7bbb655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb  2 07:08:49 np0005604943 systemd[1]: var-lib-containers-storage-overlay-07c9b7fca459e5907002c30df256d37dcbc7627721de719ae9762df6defb5ca2-merged.mount: Deactivated successfully.
Feb  2 07:08:50 np0005604943 podman[268616]: 2026-02-02 12:08:50.011417632 +0000 UTC m=+0.170932932 container remove 740700d9cd7e1da2054a311d32b70f89aef4be5f7a32f26fb2ce1142a7bbb655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_colden, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:08:50 np0005604943 systemd[1]: libpod-conmon-740700d9cd7e1da2054a311d32b70f89aef4be5f7a32f26fb2ce1142a7bbb655.scope: Deactivated successfully.
Feb  2 07:08:50 np0005604943 podman[268658]: 2026-02-02 12:08:50.162235385 +0000 UTC m=+0.042771931 container create ba226444c9a34f0ce06e3935e0289526f46810533a91e17133d856de199738db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_engelbart, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:08:50 np0005604943 systemd[1]: Started libpod-conmon-ba226444c9a34f0ce06e3935e0289526f46810533a91e17133d856de199738db.scope.
Feb  2 07:08:50 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:08:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f2d0200ab8018391fa5355316fe79f59da4064587f623da06f87fe151c08da0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:08:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f2d0200ab8018391fa5355316fe79f59da4064587f623da06f87fe151c08da0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:08:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f2d0200ab8018391fa5355316fe79f59da4064587f623da06f87fe151c08da0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:08:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f2d0200ab8018391fa5355316fe79f59da4064587f623da06f87fe151c08da0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:08:50 np0005604943 podman[268658]: 2026-02-02 12:08:50.233824932 +0000 UTC m=+0.114361498 container init ba226444c9a34f0ce06e3935e0289526f46810533a91e17133d856de199738db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:08:50 np0005604943 podman[268658]: 2026-02-02 12:08:50.239066079 +0000 UTC m=+0.119602625 container start ba226444c9a34f0ce06e3935e0289526f46810533a91e17133d856de199738db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 07:08:50 np0005604943 podman[268658]: 2026-02-02 12:08:50.142919069 +0000 UTC m=+0.023455645 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:08:50 np0005604943 podman[268658]: 2026-02-02 12:08:50.244177434 +0000 UTC m=+0.124714010 container attach ba226444c9a34f0ce06e3935e0289526f46810533a91e17133d856de199738db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_engelbart, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:08:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Feb  2 07:08:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Feb  2 07:08:50 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Feb  2 07:08:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 271 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 38 KiB/s wr, 141 op/s
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]: {
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:    "0": [
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:        {
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "devices": [
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "/dev/loop3"
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            ],
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_name": "ceph_lv0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_size": "21470642176",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "name": "ceph_lv0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "tags": {
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.cluster_name": "ceph",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.crush_device_class": "",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.encrypted": "0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.objectstore": "bluestore",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.osd_id": "0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.type": "block",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.vdo": "0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.with_tpm": "0"
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            },
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "type": "block",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "vg_name": "ceph_vg0"
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:        }
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:    ],
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:    "1": [
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:        {
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "devices": [
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "/dev/loop4"
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            ],
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_name": "ceph_lv1",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_size": "21470642176",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "name": "ceph_lv1",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "tags": {
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.cluster_name": "ceph",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.crush_device_class": "",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.encrypted": "0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.objectstore": "bluestore",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.osd_id": "1",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.type": "block",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.vdo": "0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.with_tpm": "0"
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            },
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "type": "block",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "vg_name": "ceph_vg1"
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:        }
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:    ],
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:    "2": [
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:        {
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "devices": [
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "/dev/loop5"
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            ],
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_name": "ceph_lv2",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_size": "21470642176",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "name": "ceph_lv2",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "tags": {
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.cluster_name": "ceph",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.crush_device_class": "",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.encrypted": "0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.objectstore": "bluestore",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.osd_id": "2",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.type": "block",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.vdo": "0",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:                "ceph.with_tpm": "0"
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            },
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "type": "block",
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:            "vg_name": "ceph_vg2"
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:        }
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]:    ]
Feb  2 07:08:50 np0005604943 wizardly_engelbart[268675]: }
Feb  2 07:08:50 np0005604943 systemd[1]: libpod-ba226444c9a34f0ce06e3935e0289526f46810533a91e17133d856de199738db.scope: Deactivated successfully.
Feb  2 07:08:50 np0005604943 podman[268658]: 2026-02-02 12:08:50.568491544 +0000 UTC m=+0.449028120 container died ba226444c9a34f0ce06e3935e0289526f46810533a91e17133d856de199738db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 07:08:50 np0005604943 systemd[1]: var-lib-containers-storage-overlay-9f2d0200ab8018391fa5355316fe79f59da4064587f623da06f87fe151c08da0-merged.mount: Deactivated successfully.
Feb  2 07:08:50 np0005604943 podman[268658]: 2026-02-02 12:08:50.612430166 +0000 UTC m=+0.492966712 container remove ba226444c9a34f0ce06e3935e0289526f46810533a91e17133d856de199738db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_engelbart, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 07:08:50 np0005604943 systemd[1]: libpod-conmon-ba226444c9a34f0ce06e3935e0289526f46810533a91e17133d856de199738db.scope: Deactivated successfully.
Feb  2 07:08:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:50 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/197682223' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:50 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/197682223' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:51 np0005604943 podman[268761]: 2026-02-02 12:08:51.119745953 +0000 UTC m=+0.044954819 container create 672b6742335e53dfff9b58993bfe261f5b142387035661ae0079c10273e21ff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:08:51 np0005604943 systemd[1]: Started libpod-conmon-672b6742335e53dfff9b58993bfe261f5b142387035661ae0079c10273e21ff3.scope.
Feb  2 07:08:51 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:08:51 np0005604943 podman[268761]: 2026-02-02 12:08:51.099865293 +0000 UTC m=+0.025074189 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:08:51 np0005604943 podman[268761]: 2026-02-02 12:08:51.208700136 +0000 UTC m=+0.133909032 container init 672b6742335e53dfff9b58993bfe261f5b142387035661ae0079c10273e21ff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Feb  2 07:08:51 np0005604943 podman[268761]: 2026-02-02 12:08:51.217137757 +0000 UTC m=+0.142346623 container start 672b6742335e53dfff9b58993bfe261f5b142387035661ae0079c10273e21ff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_moser, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:08:51 np0005604943 podman[268761]: 2026-02-02 12:08:51.220429833 +0000 UTC m=+0.145638719 container attach 672b6742335e53dfff9b58993bfe261f5b142387035661ae0079c10273e21ff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_moser, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb  2 07:08:51 np0005604943 zen_moser[268777]: 167 167
Feb  2 07:08:51 np0005604943 systemd[1]: libpod-672b6742335e53dfff9b58993bfe261f5b142387035661ae0079c10273e21ff3.scope: Deactivated successfully.
Feb  2 07:08:51 np0005604943 podman[268761]: 2026-02-02 12:08:51.223477122 +0000 UTC m=+0.148686008 container died 672b6742335e53dfff9b58993bfe261f5b142387035661ae0079c10273e21ff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_moser, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:08:51 np0005604943 systemd[1]: var-lib-containers-storage-overlay-235c9bc8ee5cae03cc726d2096419e820afbceefaecc1c616af7af757e40cbd3-merged.mount: Deactivated successfully.
Feb  2 07:08:51 np0005604943 podman[268761]: 2026-02-02 12:08:51.256880198 +0000 UTC m=+0.182089064 container remove 672b6742335e53dfff9b58993bfe261f5b142387035661ae0079c10273e21ff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 07:08:51 np0005604943 systemd[1]: libpod-conmon-672b6742335e53dfff9b58993bfe261f5b142387035661ae0079c10273e21ff3.scope: Deactivated successfully.
Feb  2 07:08:51 np0005604943 nova_compute[238883]: 2026-02-02 12:08:51.339 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:51 np0005604943 podman[268800]: 2026-02-02 12:08:51.407116836 +0000 UTC m=+0.043793469 container create afb4bcb3930bda330c27e5a88e6ea0de9e7398ace1653bfef90b658faeab2358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_keller, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:08:51 np0005604943 systemd[1]: Started libpod-conmon-afb4bcb3930bda330c27e5a88e6ea0de9e7398ace1653bfef90b658faeab2358.scope.
Feb  2 07:08:51 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:08:51 np0005604943 podman[268800]: 2026-02-02 12:08:51.385550131 +0000 UTC m=+0.022226794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:08:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9944bd59281f7b819266ea75f0719d03705171891690170636935108ea5e52a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:08:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9944bd59281f7b819266ea75f0719d03705171891690170636935108ea5e52a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:08:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9944bd59281f7b819266ea75f0719d03705171891690170636935108ea5e52a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:08:51 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9944bd59281f7b819266ea75f0719d03705171891690170636935108ea5e52a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:08:51 np0005604943 podman[268800]: 2026-02-02 12:08:51.499590009 +0000 UTC m=+0.136266662 container init afb4bcb3930bda330c27e5a88e6ea0de9e7398ace1653bfef90b658faeab2358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_keller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:08:51 np0005604943 podman[268800]: 2026-02-02 12:08:51.505772752 +0000 UTC m=+0.142449385 container start afb4bcb3930bda330c27e5a88e6ea0de9e7398ace1653bfef90b658faeab2358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:08:51 np0005604943 podman[268800]: 2026-02-02 12:08:51.510909426 +0000 UTC m=+0.147586059 container attach afb4bcb3930bda330c27e5a88e6ea0de9e7398ace1653bfef90b658faeab2358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_keller, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 07:08:51 np0005604943 nova_compute[238883]: 2026-02-02 12:08:51.786 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:52 np0005604943 lvm[268895]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:08:52 np0005604943 lvm[268894]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:08:52 np0005604943 lvm[268894]: VG ceph_vg0 finished
Feb  2 07:08:52 np0005604943 lvm[268895]: VG ceph_vg1 finished
Feb  2 07:08:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Feb  2 07:08:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Feb  2 07:08:52 np0005604943 lvm[268897]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:08:52 np0005604943 lvm[268897]: VG ceph_vg2 finished
Feb  2 07:08:52 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Feb  2 07:08:52 np0005604943 strange_keller[268816]: {}
Feb  2 07:08:52 np0005604943 systemd[1]: libpod-afb4bcb3930bda330c27e5a88e6ea0de9e7398ace1653bfef90b658faeab2358.scope: Deactivated successfully.
Feb  2 07:08:52 np0005604943 systemd[1]: libpod-afb4bcb3930bda330c27e5a88e6ea0de9e7398ace1653bfef90b658faeab2358.scope: Consumed 1.298s CPU time.
Feb  2 07:08:52 np0005604943 podman[268800]: 2026-02-02 12:08:52.408613337 +0000 UTC m=+1.045289970 container died afb4bcb3930bda330c27e5a88e6ea0de9e7398ace1653bfef90b658faeab2358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_keller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:08:52 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f9944bd59281f7b819266ea75f0719d03705171891690170636935108ea5e52a-merged.mount: Deactivated successfully.
Feb  2 07:08:52 np0005604943 podman[268800]: 2026-02-02 12:08:52.463575698 +0000 UTC m=+1.100252321 container remove afb4bcb3930bda330c27e5a88e6ea0de9e7398ace1653bfef90b658faeab2358 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 07:08:52 np0005604943 systemd[1]: libpod-conmon-afb4bcb3930bda330c27e5a88e6ea0de9e7398ace1653bfef90b658faeab2358.scope: Deactivated successfully.
Feb  2 07:08:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 271 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 161 KiB/s rd, 41 KiB/s wr, 204 op/s
Feb  2 07:08:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:08:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:08:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:08:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:08:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1771511399' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1771511399' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:08:53 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:08:53 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:08:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:08:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/501922420' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:08:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:08:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/501922420' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:08:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 271 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 188 KiB/s rd, 39 KiB/s wr, 239 op/s
Feb  2 07:08:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Feb  2 07:08:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Feb  2 07:08:56 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Feb  2 07:08:56 np0005604943 nova_compute[238883]: 2026-02-02 12:08:56.383 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 271 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 131 KiB/s rd, 2.9 KiB/s wr, 164 op/s
Feb  2 07:08:56 np0005604943 nova_compute[238883]: 2026-02-02 12:08:56.788 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:08:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:08:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Feb  2 07:08:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Feb  2 07:08:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Feb  2 07:08:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 271 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 156 KiB/s rd, 4.2 KiB/s wr, 200 op/s
Feb  2 07:08:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Feb  2 07:08:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Feb  2 07:08:59 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Feb  2 07:09:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 271 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 2.8 KiB/s wr, 65 op/s
Feb  2 07:09:01 np0005604943 nova_compute[238883]: 2026-02-02 12:09:01.386 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:01 np0005604943 nova_compute[238883]: 2026-02-02 12:09:01.789 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 347 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 12 MiB/s wr, 203 op/s
Feb  2 07:09:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:09:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Feb  2 07:09:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Feb  2 07:09:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Feb  2 07:09:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 140 KiB/s rd, 17 MiB/s wr, 194 op/s
Feb  2 07:09:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Feb  2 07:09:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Feb  2 07:09:04 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Feb  2 07:09:05 np0005604943 nova_compute[238883]: 2026-02-02 12:09:05.405 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "f425e716-a5bd-4c8e-8135-829321a4281c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:05 np0005604943 nova_compute[238883]: 2026-02-02 12:09:05.406 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:09:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/250294772' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:09:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:09:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/250294772' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:09:05 np0005604943 nova_compute[238883]: 2026-02-02 12:09:05.439 238887 DEBUG nova.compute.manager [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:09:05 np0005604943 nova_compute[238883]: 2026-02-02 12:09:05.530 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:05 np0005604943 nova_compute[238883]: 2026-02-02 12:09:05.530 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:05 np0005604943 nova_compute[238883]: 2026-02-02 12:09:05.542 238887 DEBUG nova.virt.hardware [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:09:05 np0005604943 nova_compute[238883]: 2026-02-02 12:09:05.543 238887 INFO nova.compute.claims [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:09:05 np0005604943 nova_compute[238883]: 2026-02-02 12:09:05.662 238887 DEBUG oslo_concurrency.processutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:09:06 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1858485382' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.243 238887 DEBUG oslo_concurrency.processutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.249 238887 DEBUG nova.compute.provider_tree [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.265 238887 DEBUG nova.scheduler.client.report [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.288 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.289 238887 DEBUG nova.compute.manager [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.337 238887 DEBUG nova.compute.manager [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.338 238887 DEBUG nova.network.neutron [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.355 238887 INFO nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.375 238887 DEBUG nova.compute.manager [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.389 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.426 238887 INFO nova.virt.block_device [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Booting with volume 67fb0117-6283-4cc0-b28b-6d772465dc05 at /dev/vda#033[00m
Feb  2 07:09:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 121 KiB/s rd, 17 MiB/s wr, 168 op/s
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.530 238887 DEBUG nova.policy [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cd5824e18d5e443cb24d3bf55ff2c553', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c7b49c49c104c079544033b07fb2f3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.563 238887 DEBUG os_brick.utils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.565 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.573 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.573 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[f7295112-4860-4e05-a83b-e026130d92f5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.574 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.582 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.582 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[43451dca-5935-4694-bdc0-d76d4a01235f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.585 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.592 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.592 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[6f3b4073-9d77-4b73-be11-e5d65fcb9e2a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.594 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[72f00713-ba03-47ff-98f0-f662a2b11207]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.595 238887 DEBUG oslo_concurrency.processutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.613 238887 DEBUG oslo_concurrency.processutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.616 238887 DEBUG os_brick.initiator.connectors.lightos [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.616 238887 DEBUG os_brick.initiator.connectors.lightos [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.616 238887 DEBUG os_brick.initiator.connectors.lightos [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.616 238887 DEBUG os_brick.utils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] <== get_connector_properties: return (52ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.617 238887 DEBUG nova.virt.block_device [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Updating existing volume attachment record: 7e1e2716-8b4c-455a-b33e-a624f4577058 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:09:06 np0005604943 nova_compute[238883]: 2026-02-02 12:09:06.790 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Feb  2 07:09:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Feb  2 07:09:06 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Feb  2 07:09:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:09:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3705079107' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:09:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:09:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3705079107' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:09:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:09:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/348844322' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:09:07 np0005604943 nova_compute[238883]: 2026-02-02 12:09:07.437 238887 DEBUG nova.network.neutron [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Successfully created port: d8560427-e926-4579-8763-e2a149f487c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:09:07 np0005604943 nova_compute[238883]: 2026-02-02 12:09:07.760 238887 DEBUG nova.compute.manager [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:09:07 np0005604943 nova_compute[238883]: 2026-02-02 12:09:07.762 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:09:07 np0005604943 nova_compute[238883]: 2026-02-02 12:09:07.763 238887 INFO nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Creating image(s)#033[00m
Feb  2 07:09:07 np0005604943 nova_compute[238883]: 2026-02-02 12:09:07.763 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 07:09:07 np0005604943 nova_compute[238883]: 2026-02-02 12:09:07.764 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Ensure instance console log exists: /var/lib/nova/instances/f425e716-a5bd-4c8e-8135-829321a4281c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:09:07 np0005604943 nova_compute[238883]: 2026-02-02 12:09:07.764 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:07 np0005604943 nova_compute[238883]: 2026-02-02 12:09:07.765 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:07 np0005604943 nova_compute[238883]: 2026-02-02 12:09:07.765 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:09:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:09:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4260092813' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:09:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:09:08 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4260092813' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:09:08 np0005604943 nova_compute[238883]: 2026-02-02 12:09:08.346 238887 DEBUG nova.network.neutron [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Successfully updated port: d8560427-e926-4579-8763-e2a149f487c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:09:08 np0005604943 nova_compute[238883]: 2026-02-02 12:09:08.380 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "refresh_cache-f425e716-a5bd-4c8e-8135-829321a4281c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:09:08 np0005604943 nova_compute[238883]: 2026-02-02 12:09:08.381 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquired lock "refresh_cache-f425e716-a5bd-4c8e-8135-829321a4281c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:09:08 np0005604943 nova_compute[238883]: 2026-02-02 12:09:08.381 238887 DEBUG nova.network.neutron [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:09:08 np0005604943 nova_compute[238883]: 2026-02-02 12:09:08.430 238887 DEBUG nova.compute.manager [req-b9800171-e033-44cf-8b3c-37c64c0fab0e req-8bc24cd4-191e-4861-8e47-b7dc841f882a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Received event network-changed-d8560427-e926-4579-8763-e2a149f487c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:09:08 np0005604943 nova_compute[238883]: 2026-02-02 12:09:08.431 238887 DEBUG nova.compute.manager [req-b9800171-e033-44cf-8b3c-37c64c0fab0e req-8bc24cd4-191e-4861-8e47-b7dc841f882a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Refreshing instance network info cache due to event network-changed-d8560427-e926-4579-8763-e2a149f487c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:09:08 np0005604943 nova_compute[238883]: 2026-02-02 12:09:08.431 238887 DEBUG oslo_concurrency.lockutils [req-b9800171-e033-44cf-8b3c-37c64c0fab0e req-8bc24cd4-191e-4861-8e47-b7dc841f882a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-f425e716-a5bd-4c8e-8135-829321a4281c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:09:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 6.0 MiB/s wr, 136 op/s
Feb  2 07:09:08 np0005604943 nova_compute[238883]: 2026-02-02 12:09:08.521 238887 DEBUG nova.network.neutron [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.447 238887 DEBUG nova.network.neutron [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Updating instance_info_cache with network_info: [{"id": "d8560427-e926-4579-8763-e2a149f487c3", "address": "fa:16:3e:ad:ea:a5", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8560427-e9", "ovs_interfaceid": "d8560427-e926-4579-8763-e2a149f487c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.504 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Releasing lock "refresh_cache-f425e716-a5bd-4c8e-8135-829321a4281c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.505 238887 DEBUG nova.compute.manager [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Instance network_info: |[{"id": "d8560427-e926-4579-8763-e2a149f487c3", "address": "fa:16:3e:ad:ea:a5", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8560427-e9", "ovs_interfaceid": "d8560427-e926-4579-8763-e2a149f487c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.506 238887 DEBUG oslo_concurrency.lockutils [req-b9800171-e033-44cf-8b3c-37c64c0fab0e req-8bc24cd4-191e-4861-8e47-b7dc841f882a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-f425e716-a5bd-4c8e-8135-829321a4281c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.506 238887 DEBUG nova.network.neutron [req-b9800171-e033-44cf-8b3c-37c64c0fab0e req-8bc24cd4-191e-4861-8e47-b7dc841f882a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Refreshing network info cache for port d8560427-e926-4579-8763-e2a149f487c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.512 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Start _get_guest_xml network_info=[{"id": "d8560427-e926-4579-8763-e2a149f487c3", "address": "fa:16:3e:ad:ea:a5", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8560427-e9", "ovs_interfaceid": "d8560427-e926-4579-8763-e2a149f487c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': '7e1e2716-8b4c-455a-b33e-a624f4577058', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-67fb0117-6283-4cc0-b28b-6d772465dc05', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '67fb0117-6283-4cc0-b28b-6d772465dc05', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'f425e716-a5bd-4c8e-8135-829321a4281c', 'attached_at': '', 'detached_at': '', 'volume_id': '67fb0117-6283-4cc0-b28b-6d772465dc05', 'serial': '67fb0117-6283-4cc0-b28b-6d772465dc05'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.519 238887 WARNING nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.525 238887 DEBUG nova.virt.libvirt.host [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.526 238887 DEBUG nova.virt.libvirt.host [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.533 238887 DEBUG nova.virt.libvirt.host [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.534 238887 DEBUG nova.virt.libvirt.host [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.534 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.535 238887 DEBUG nova.virt.hardware [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.535 238887 DEBUG nova.virt.hardware [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.535 238887 DEBUG nova.virt.hardware [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.536 238887 DEBUG nova.virt.hardware [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.536 238887 DEBUG nova.virt.hardware [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.536 238887 DEBUG nova.virt.hardware [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.536 238887 DEBUG nova.virt.hardware [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.536 238887 DEBUG nova.virt.hardware [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.537 238887 DEBUG nova.virt.hardware [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.537 238887 DEBUG nova.virt.hardware [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.537 238887 DEBUG nova.virt.hardware [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.560 238887 DEBUG nova.storage.rbd_utils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] rbd image f425e716-a5bd-4c8e-8135-829321a4281c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 07:09:09 np0005604943 nova_compute[238883]: 2026-02-02 12:09:09.565 238887 DEBUG oslo_concurrency.processutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 07:09:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:09:09
Feb  2 07:09:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:09:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:09:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', '.mgr', 'backups', 'volumes']
Feb  2 07:09:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:09:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:10.033 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 07:09:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:10.034 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 07:09:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:10.035 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:09:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:09:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784550343' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.148 238887 DEBUG oslo_concurrency.processutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.284 238887 DEBUG os_brick.encryptors [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Using volume encryption metadata '{'encryption_key_id': '23949615-ea1b-4311-82c1-8e6e5465ea86', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-67fb0117-6283-4cc0-b28b-6d772465dc05', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '67fb0117-6283-4cc0-b28b-6d772465dc05', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'f425e716-a5bd-4c8e-8135-829321a4281c', 'attached_at': '', 'detached_at': '', 'volume_id': '67fb0117-6283-4cc0-b28b-6d772465dc05', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.287 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.305 238887 DEBUG barbicanclient.v1.secrets [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/23949615-ea1b-4311-82c1-8e6e5465ea86 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.306 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.327 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.328 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.353 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.354 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.382 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.383 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.407 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.408 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.429 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.430 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.452 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.453 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.474 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.475 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.501 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.502 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 4.7 MiB/s wr, 156 op/s
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.628 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.629 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.705 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.705 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.731 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.731 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.753 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.754 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.799 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.800 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.820 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.821 238887 INFO barbicanclient.base [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/23949615-ea1b-4311-82c1-8e6e5465ea86
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.851 238887 DEBUG barbicanclient.client [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.852 238887 DEBUG nova.virt.libvirt.host [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  <usage type="volume">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <volume>67fb0117-6283-4cc0-b28b-6d772465dc05</volume>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  </usage>
Feb  2 07:09:10 np0005604943 nova_compute[238883]: </secret>
Feb  2 07:09:10 np0005604943 nova_compute[238883]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Feb  2 07:09:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Feb  2 07:09:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Feb  2 07:09:10 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.915 238887 DEBUG nova.virt.libvirt.vif [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:09:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1095099264',display_name='tempest-TransferEncryptedVolumeTest-server-1095099264',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1095099264',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7aUv7PBeO78zp3TdCQ5pHHrUWfgatS9ASOECbWv5UGrW7YbMyQ2Q5xaozZcd0G8LLxfP6XSKv3an4flOYSD0UKdfJBDp1c8Bpfee8qRIo6Ih80jJn9izsYTHHCaZomBw==',key_name='tempest-TransferEncryptedVolumeTest-1781235943',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c7b49c49c104c079544033b07fb2f3d',ramdisk_id='',reservation_id='r-hcfjc9t6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-347797880',owner_user_name='tempest-TransferEncryptedVolumeTest-347797880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:09:06Z,user_data=None,user_id='cd5824e18d5e443cb24d3bf55ff2c553',uuid=f425e716-a5bd-4c8e-8135-829321a4281c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8560427-e926-4579-8763-e2a149f487c3", "address": "fa:16:3e:ad:ea:a5", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8560427-e9", "ovs_interfaceid": "d8560427-e926-4579-8763-e2a149f487c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.916 238887 DEBUG nova.network.os_vif_util [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converting VIF {"id": "d8560427-e926-4579-8763-e2a149f487c3", "address": "fa:16:3e:ad:ea:a5", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8560427-e9", "ovs_interfaceid": "d8560427-e926-4579-8763-e2a149f487c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.917 238887 DEBUG nova.network.os_vif_util [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:ea:a5,bridge_name='br-int',has_traffic_filtering=True,id=d8560427-e926-4579-8763-e2a149f487c3,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8560427-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.918 238887 DEBUG nova.objects.instance [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lazy-loading 'pci_devices' on Instance uuid f425e716-a5bd-4c8e-8135-829321a4281c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.932 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  <uuid>f425e716-a5bd-4c8e-8135-829321a4281c</uuid>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  <name>instance-0000001b</name>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1095099264</nova:name>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:09:09</nova:creationTime>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        <nova:user uuid="cd5824e18d5e443cb24d3bf55ff2c553">tempest-TransferEncryptedVolumeTest-347797880-project-member</nova:user>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        <nova:project uuid="4c7b49c49c104c079544033b07fb2f3d">tempest-TransferEncryptedVolumeTest-347797880</nova:project>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        <nova:port uuid="d8560427-e926-4579-8763-e2a149f487c3">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <entry name="serial">f425e716-a5bd-4c8e-8135-829321a4281c</entry>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <entry name="uuid">f425e716-a5bd-4c8e-8135-829321a4281c</entry>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/f425e716-a5bd-4c8e-8135-829321a4281c_disk.config">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-67fb0117-6283-4cc0-b28b-6d772465dc05">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <serial>67fb0117-6283-4cc0-b28b-6d772465dc05</serial>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <encryption format="luks">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:        <secret type="passphrase" uuid="004d2fc2-54eb-4175-aa56-d179f298d342"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      </encryption>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:ad:ea:a5"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <target dev="tapd8560427-e9"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/f425e716-a5bd-4c8e-8135-829321a4281c/console.log" append="off"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:09:10 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:09:10 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:09:10 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:09:10 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.933 238887 DEBUG nova.compute.manager [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Preparing to wait for external event network-vif-plugged-d8560427-e926-4579-8763-e2a149f487c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.934 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.934 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.934 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.935 238887 DEBUG nova.virt.libvirt.vif [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:09:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1095099264',display_name='tempest-TransferEncryptedVolumeTest-server-1095099264',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1095099264',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7aUv7PBeO78zp3TdCQ5pHHrUWfgatS9ASOECbWv5UGrW7YbMyQ2Q5xaozZcd0G8LLxfP6XSKv3an4flOYSD0UKdfJBDp1c8Bpfee8qRIo6Ih80jJn9izsYTHHCaZomBw==',key_name='tempest-TransferEncryptedVolumeTest-1781235943',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c7b49c49c104c079544033b07fb2f3d',ramdisk_id='',reservation_id='r-hcfjc9t6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-347797880',owner_user_name='tempest-TransferEncryptedVolumeTest-347797880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:09:06Z,user_data=None,user_id='cd5824e18d5e443cb24d3bf55ff2c553',uuid=f425e716-a5bd-4c8e-8135-829321a4281c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8560427-e926-4579-8763-e2a149f487c3", "address": "fa:16:3e:ad:ea:a5", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8560427-e9", "ovs_interfaceid": "d8560427-e926-4579-8763-e2a149f487c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.935 238887 DEBUG nova.network.os_vif_util [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converting VIF {"id": "d8560427-e926-4579-8763-e2a149f487c3", "address": "fa:16:3e:ad:ea:a5", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8560427-e9", "ovs_interfaceid": "d8560427-e926-4579-8763-e2a149f487c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.936 238887 DEBUG nova.network.os_vif_util [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:ea:a5,bridge_name='br-int',has_traffic_filtering=True,id=d8560427-e926-4579-8763-e2a149f487c3,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8560427-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.936 238887 DEBUG os_vif [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:ea:a5,bridge_name='br-int',has_traffic_filtering=True,id=d8560427-e926-4579-8763-e2a149f487c3,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8560427-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.937 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.938 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.938 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.943 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.943 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8560427-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.944 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8560427-e9, col_values=(('external_ids', {'iface-id': 'd8560427-e926-4579-8763-e2a149f487c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:ea:a5', 'vm-uuid': 'f425e716-a5bd-4c8e-8135-829321a4281c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.946 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:10 np0005604943 NetworkManager[49093]: <info>  [1770034150.9476] manager: (tapd8560427-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.948 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.953 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:10 np0005604943 nova_compute[238883]: 2026-02-02 12:09:10.955 238887 INFO os_vif [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:ea:a5,bridge_name='br-int',has_traffic_filtering=True,id=d8560427-e926-4579-8763-e2a149f487c3,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8560427-e9')#033[00m
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:09:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.010 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.011 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.011 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] No VIF found with MAC fa:16:3e:ad:ea:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.012 238887 INFO nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Using config drive#033[00m
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.031 238887 DEBUG nova.storage.rbd_utils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] rbd image f425e716-a5bd-4c8e-8135-829321a4281c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.605 238887 DEBUG nova.network.neutron [req-b9800171-e033-44cf-8b3c-37c64c0fab0e req-8bc24cd4-191e-4861-8e47-b7dc841f882a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Updated VIF entry in instance network info cache for port d8560427-e926-4579-8763-e2a149f487c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.606 238887 DEBUG nova.network.neutron [req-b9800171-e033-44cf-8b3c-37c64c0fab0e req-8bc24cd4-191e-4861-8e47-b7dc841f882a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Updating instance_info_cache with network_info: [{"id": "d8560427-e926-4579-8763-e2a149f487c3", "address": "fa:16:3e:ad:ea:a5", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8560427-e9", "ovs_interfaceid": "d8560427-e926-4579-8763-e2a149f487c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.622 238887 DEBUG oslo_concurrency.lockutils [req-b9800171-e033-44cf-8b3c-37c64c0fab0e req-8bc24cd4-191e-4861-8e47-b7dc841f882a 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-f425e716-a5bd-4c8e-8135-829321a4281c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.706 238887 INFO nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Creating config drive at /var/lib/nova/instances/f425e716-a5bd-4c8e-8135-829321a4281c/disk.config#033[00m
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.711 238887 DEBUG oslo_concurrency.processutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f425e716-a5bd-4c8e-8135-829321a4281c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp08ox5b_0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.792 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.839 238887 DEBUG oslo_concurrency.processutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f425e716-a5bd-4c8e-8135-829321a4281c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp08ox5b_0" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.864 238887 DEBUG nova.storage.rbd_utils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] rbd image f425e716-a5bd-4c8e-8135-829321a4281c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:09:11 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.868 238887 DEBUG oslo_concurrency.processutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f425e716-a5bd-4c8e-8135-829321a4281c/disk.config f425e716-a5bd-4c8e-8135-829321a4281c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:11.999 238887 DEBUG oslo_concurrency.processutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f425e716-a5bd-4c8e-8135-829321a4281c/disk.config f425e716-a5bd-4c8e-8135-829321a4281c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.000 238887 INFO nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Deleting local config drive /var/lib/nova/instances/f425e716-a5bd-4c8e-8135-829321a4281c/disk.config because it was imported into RBD.#033[00m
Feb  2 07:09:12 np0005604943 kernel: tapd8560427-e9: entered promiscuous mode
Feb  2 07:09:12 np0005604943 NetworkManager[49093]: <info>  [1770034152.0459] manager: (tapd8560427-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Feb  2 07:09:12 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:12Z|00263|binding|INFO|Claiming lport d8560427-e926-4579-8763-e2a149f487c3 for this chassis.
Feb  2 07:09:12 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:12Z|00264|binding|INFO|d8560427-e926-4579-8763-e2a149f487c3: Claiming fa:16:3e:ad:ea:a5 10.100.0.5
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.048 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.050 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.054 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.069 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:ea:a5 10.100.0.5'], port_security=['fa:16:3e:ad:ea:a5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f425e716-a5bd-4c8e-8135-829321a4281c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efa24ae1-9962-44ca-882a-8d146356fcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c7b49c49c104c079544033b07fb2f3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2781f824-7ac1-4375-9bc7-15197abfb3e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b6e8bcf-741b-41c8-a826-9b6dbb1c260b, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=d8560427-e926-4579-8763-e2a149f487c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.070 155011 INFO neutron.agent.ovn.metadata.agent [-] Port d8560427-e926-4579-8763-e2a149f487c3 in datapath efa24ae1-9962-44ca-882a-8d146356fcca bound to our chassis#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.071 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network efa24ae1-9962-44ca-882a-8d146356fcca#033[00m
Feb  2 07:09:12 np0005604943 systemd-udevd[269080]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.080 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:12 np0005604943 systemd-machined[206973]: New machine qemu-27-instance-0000001b.
Feb  2 07:09:12 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:12Z|00265|binding|INFO|Setting lport d8560427-e926-4579-8763-e2a149f487c3 ovn-installed in OVS
Feb  2 07:09:12 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:12Z|00266|binding|INFO|Setting lport d8560427-e926-4579-8763-e2a149f487c3 up in Southbound
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.084 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.084 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[23179404-2248-48ed-8c03-afefb6214665]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.085 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapefa24ae1-91 in ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.087 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapefa24ae1-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.087 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[0a26d185-2319-48a7-a024-b614707b41f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.088 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[726e98aa-3f5e-4b99-acfe-9daab37fbbed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 NetworkManager[49093]: <info>  [1770034152.0911] device (tapd8560427-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:09:12 np0005604943 NetworkManager[49093]: <info>  [1770034152.0918] device (tapd8560427-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:09:12 np0005604943 systemd[1]: Started Virtual Machine qemu-27-instance-0000001b.
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.098 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee018ee-a15c-4177-b63a-e5ebb31bfad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.110 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a04940ec-ad48-4972-a299-d615fa558deb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.140 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[77df68ff-b818-4a28-b7aa-63f28e2953ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.146 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7074db15-7f80-4a31-a1be-9a69cfd9139a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 systemd-udevd[269083]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:09:12 np0005604943 NetworkManager[49093]: <info>  [1770034152.1482] manager: (tapefa24ae1-90): new Veth device (/org/freedesktop/NetworkManager/Devices/134)
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.176 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[36db809d-5fe0-400c-b138-d08f510f536d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.179 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[bb602450-610a-4281-a322-59a19ee9d937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 NetworkManager[49093]: <info>  [1770034152.1989] device (tapefa24ae1-90): carrier: link connected
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.201 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f72e8d-3d00-4df4-a4a3-66812c98c086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.218 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c0483c-a8fd-4f37-8fc9-bbaec627ab67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefa24ae1-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:4e:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459754, 'reachable_time': 22575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269113, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.232 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd73c9f-1e41-4aeb-8778-02e09d9fc4cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:4ebf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459754, 'tstamp': 459754}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269114, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.245 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a731590b-82ba-4325-9a31-84c403f4f7bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefa24ae1-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:4e:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459754, 'reachable_time': 22575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269115, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.267 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[62b48f40-be98-4597-8134-438105852a39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.318 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ae4f6e-1507-4cc0-9de1-3e63506b2675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.320 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefa24ae1-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.320 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.320 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefa24ae1-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:12 np0005604943 NetworkManager[49093]: <info>  [1770034152.3224] manager: (tapefa24ae1-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Feb  2 07:09:12 np0005604943 kernel: tapefa24ae1-90: entered promiscuous mode
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.324 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapefa24ae1-90, col_values=(('external_ids', {'iface-id': '88fa0d04-0a79-4556-b2c6-d65a3a18ab58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.322 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.324 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:12 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:12Z|00267|binding|INFO|Releasing lport 88fa0d04-0a79-4556-b2c6-d65a3a18ab58 from this chassis (sb_readonly=0)
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.325 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.326 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.326 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/efa24ae1-9962-44ca-882a-8d146356fcca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/efa24ae1-9962-44ca-882a-8d146356fcca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.327 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[46a80571-7f3e-4809-9587-3bad4e216750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.328 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-efa24ae1-9962-44ca-882a-8d146356fcca
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/efa24ae1-9962-44ca-882a-8d146356fcca.pid.haproxy
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID efa24ae1-9962-44ca-882a-8d146356fcca
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 07:09:12 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:12.328 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'env', 'PROCESS_TAG=haproxy-efa24ae1-9962-44ca-882a-8d146356fcca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/efa24ae1-9962-44ca-882a-8d146356fcca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.332 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.355 238887 DEBUG nova.compute.manager [req-edd351b4-c348-4e4d-b00b-9c6d52e67548 req-cdf26acf-2af5-47a5-982a-b3163c3adaf7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Received event network-vif-plugged-d8560427-e926-4579-8763-e2a149f487c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.355 238887 DEBUG oslo_concurrency.lockutils [req-edd351b4-c348-4e4d-b00b-9c6d52e67548 req-cdf26acf-2af5-47a5-982a-b3163c3adaf7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.356 238887 DEBUG oslo_concurrency.lockutils [req-edd351b4-c348-4e4d-b00b-9c6d52e67548 req-cdf26acf-2af5-47a5-982a-b3163c3adaf7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.356 238887 DEBUG oslo_concurrency.lockutils [req-edd351b4-c348-4e4d-b00b-9c6d52e67548 req-cdf26acf-2af5-47a5-982a-b3163c3adaf7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:12 np0005604943 nova_compute[238883]: 2026-02-02 12:09:12.356 238887 DEBUG nova.compute.manager [req-edd351b4-c348-4e4d-b00b-9c6d52e67548 req-cdf26acf-2af5-47a5-982a-b3163c3adaf7 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Processing event network-vif-plugged-d8560427-e926-4579-8763-e2a149f487c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:09:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 3.2 KiB/s wr, 156 op/s
Feb  2 07:09:12 np0005604943 podman[269183]: 2026-02-02 12:09:12.690655543 +0000 UTC m=+0.049810976 container create 37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb  2 07:09:12 np0005604943 systemd[1]: Started libpod-conmon-37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445.scope.
Feb  2 07:09:12 np0005604943 podman[269183]: 2026-02-02 12:09:12.666724506 +0000 UTC m=+0.025879959 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:09:12 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:09:12 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdfb9c93815b9dc82e7a2d824e901d15e38cebd95e987d9a95f7d25acfc6c1f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:12 np0005604943 podman[269183]: 2026-02-02 12:09:12.781981718 +0000 UTC m=+0.141137161 container init 37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 07:09:12 np0005604943 podman[269183]: 2026-02-02 12:09:12.785715545 +0000 UTC m=+0.144870978 container start 37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:09:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:09:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Feb  2 07:09:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Feb  2 07:09:12 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Feb  2 07:09:12 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269198]: [NOTICE]   (269202) : New worker (269204) forked
Feb  2 07:09:12 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269198]: [NOTICE]   (269202) : Loading success.
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.468 238887 DEBUG nova.compute.manager [req-7893ff62-a1d8-4a56-85f6-609bfcdda928 req-40147bdf-eb8e-4d1e-905c-92608a27a1ad 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Received event network-vif-plugged-d8560427-e926-4579-8763-e2a149f487c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.469 238887 DEBUG oslo_concurrency.lockutils [req-7893ff62-a1d8-4a56-85f6-609bfcdda928 req-40147bdf-eb8e-4d1e-905c-92608a27a1ad 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.470 238887 DEBUG oslo_concurrency.lockutils [req-7893ff62-a1d8-4a56-85f6-609bfcdda928 req-40147bdf-eb8e-4d1e-905c-92608a27a1ad 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.470 238887 DEBUG oslo_concurrency.lockutils [req-7893ff62-a1d8-4a56-85f6-609bfcdda928 req-40147bdf-eb8e-4d1e-905c-92608a27a1ad 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.470 238887 DEBUG nova.compute.manager [req-7893ff62-a1d8-4a56-85f6-609bfcdda928 req-40147bdf-eb8e-4d1e-905c-92608a27a1ad 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] No waiting events found dispatching network-vif-plugged-d8560427-e926-4579-8763-e2a149f487c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.471 238887 WARNING nova.compute.manager [req-7893ff62-a1d8-4a56-85f6-609bfcdda928 req-40147bdf-eb8e-4d1e-905c-92608a27a1ad 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Received unexpected event network-vif-plugged-d8560427-e926-4579-8763-e2a149f487c3 for instance with vm_state building and task_state spawning.#033[00m
Feb  2 07:09:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 23 KiB/s wr, 169 op/s
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.866 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034154.8656702, f425e716-a5bd-4c8e-8135-829321a4281c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.867 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] VM Started (Lifecycle Event)#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.870 238887 DEBUG nova.compute.manager [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.873 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.878 238887 INFO nova.virt.libvirt.driver [-] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Instance spawned successfully.#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.878 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.893 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.900 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.904 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.905 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.905 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.905 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.906 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.906 238887 DEBUG nova.virt.libvirt.driver [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.932 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.932 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034154.86606, f425e716-a5bd-4c8e-8135-829321a4281c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.932 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.957 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.961 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034154.8730633, f425e716-a5bd-4c8e-8135-829321a4281c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.961 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.968 238887 INFO nova.compute.manager [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Took 7.21 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.968 238887 DEBUG nova.compute.manager [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.977 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:09:14 np0005604943 nova_compute[238883]: 2026-02-02 12:09:14.980 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:09:15 np0005604943 nova_compute[238883]: 2026-02-02 12:09:15.009 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:09:15 np0005604943 nova_compute[238883]: 2026-02-02 12:09:15.029 238887 INFO nova.compute.manager [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Took 9.54 seconds to build instance.#033[00m
Feb  2 07:09:15 np0005604943 nova_compute[238883]: 2026-02-02 12:09:15.044 238887 DEBUG oslo_concurrency.lockutils [None req-bb3e0da8-4a90-4032-a04c-604e5b5fe528 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Feb  2 07:09:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Feb  2 07:09:15 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Feb  2 07:09:15 np0005604943 nova_compute[238883]: 2026-02-02 12:09:15.947 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:09:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2064480209' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:09:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:09:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2064480209' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:09:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 26 KiB/s wr, 62 op/s
Feb  2 07:09:16 np0005604943 nova_compute[238883]: 2026-02-02 12:09:16.795 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:17 np0005604943 nova_compute[238883]: 2026-02-02 12:09:17.249 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:17.249 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:09:17 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:17.250 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 07:09:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:09:18 np0005604943 podman[269220]: 2026-02-02 12:09:18.037919294 +0000 UTC m=+0.054176310 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb  2 07:09:18 np0005604943 podman[269219]: 2026-02-02 12:09:18.076959377 +0000 UTC m=+0.093628225 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, 
org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 07:09:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 23 KiB/s wr, 193 op/s
Feb  2 07:09:18 np0005604943 nova_compute[238883]: 2026-02-02 12:09:18.527 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:18 np0005604943 NetworkManager[49093]: <info>  [1770034158.5285] manager: (patch-br-int-to-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Feb  2 07:09:18 np0005604943 NetworkManager[49093]: <info>  [1770034158.5294] manager: (patch-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Feb  2 07:09:18 np0005604943 nova_compute[238883]: 2026-02-02 12:09:18.569 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:18 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:18Z|00268|binding|INFO|Releasing lport 88fa0d04-0a79-4556-b2c6-d65a3a18ab58 from this chassis (sb_readonly=0)
Feb  2 07:09:18 np0005604943 nova_compute[238883]: 2026-02-02 12:09:18.583 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:18 np0005604943 nova_compute[238883]: 2026-02-02 12:09:18.796 238887 DEBUG nova.compute.manager [req-cb067fed-a280-4860-9f27-f2ef1fb40693 req-f43e9409-ce4c-481e-9d65-80e1160380b5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Received event network-changed-d8560427-e926-4579-8763-e2a149f487c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:09:18 np0005604943 nova_compute[238883]: 2026-02-02 12:09:18.796 238887 DEBUG nova.compute.manager [req-cb067fed-a280-4860-9f27-f2ef1fb40693 req-f43e9409-ce4c-481e-9d65-80e1160380b5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Refreshing instance network info cache due to event network-changed-d8560427-e926-4579-8763-e2a149f487c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:09:18 np0005604943 nova_compute[238883]: 2026-02-02 12:09:18.798 238887 DEBUG oslo_concurrency.lockutils [req-cb067fed-a280-4860-9f27-f2ef1fb40693 req-f43e9409-ce4c-481e-9d65-80e1160380b5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-f425e716-a5bd-4c8e-8135-829321a4281c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:09:18 np0005604943 nova_compute[238883]: 2026-02-02 12:09:18.798 238887 DEBUG oslo_concurrency.lockutils [req-cb067fed-a280-4860-9f27-f2ef1fb40693 req-f43e9409-ce4c-481e-9d65-80e1160380b5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-f425e716-a5bd-4c8e-8135-829321a4281c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:09:18 np0005604943 nova_compute[238883]: 2026-02-02 12:09:18.798 238887 DEBUG nova.network.neutron [req-cb067fed-a280-4860-9f27-f2ef1fb40693 req-f43e9409-ce4c-481e-9d65-80e1160380b5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Refreshing network info cache for port d8560427-e926-4579-8763-e2a149f487c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:18.901005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034158901049, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2455, "num_deletes": 512, "total_data_size": 3302491, "memory_usage": 3371344, "flush_reason": "Manual Compaction"}
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034158923544, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3241734, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29384, "largest_seqno": 31838, "table_properties": {"data_size": 3231009, "index_size": 6387, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25658, "raw_average_key_size": 19, "raw_value_size": 3207291, "raw_average_value_size": 2484, "num_data_blocks": 279, "num_entries": 1291, "num_filter_entries": 1291, "num_deletions": 512, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770033972, "oldest_key_time": 1770033972, "file_creation_time": 1770034158, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 22602 microseconds, and 5817 cpu microseconds.
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:18.923604) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3241734 bytes OK
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:18.923626) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:18.928643) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:18.928659) EVENT_LOG_v1 {"time_micros": 1770034158928655, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:18.928681) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3291082, prev total WAL file size 3291082, number of live WAL files 2.
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:18.929290) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3165KB)], [62(9020KB)]
Feb  2 07:09:18 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034158929376, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12478402, "oldest_snapshot_seqno": -1}
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6222 keys, 10561452 bytes, temperature: kUnknown
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034159003431, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10561452, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10513635, "index_size": 31131, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 156863, "raw_average_key_size": 25, "raw_value_size": 10395720, "raw_average_value_size": 1670, "num_data_blocks": 1248, "num_entries": 6222, "num_filter_entries": 6222, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770034158, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:19.004004) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10561452 bytes
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:19.005813) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.2 rd, 142.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 7261, records dropped: 1039 output_compression: NoCompression
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:19.005836) EVENT_LOG_v1 {"time_micros": 1770034159005825, "job": 34, "event": "compaction_finished", "compaction_time_micros": 74166, "compaction_time_cpu_micros": 20174, "output_level": 6, "num_output_files": 1, "total_output_size": 10561452, "num_input_records": 7261, "num_output_records": 6222, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034159006402, "job": 34, "event": "table_file_deletion", "file_number": 64}
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034159007534, "job": 34, "event": "table_file_deletion", "file_number": 62}
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:18.929165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:19.007618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:19.007625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:19.007626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:19.007628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:09:19.007629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:09:19 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:19.252 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1584955401' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:09:19 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1584955401' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:09:19 np0005604943 nova_compute[238883]: 2026-02-02 12:09:19.900 238887 DEBUG nova.network.neutron [req-cb067fed-a280-4860-9f27-f2ef1fb40693 req-f43e9409-ce4c-481e-9d65-80e1160380b5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Updated VIF entry in instance network info cache for port d8560427-e926-4579-8763-e2a149f487c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:09:19 np0005604943 nova_compute[238883]: 2026-02-02 12:09:19.901 238887 DEBUG nova.network.neutron [req-cb067fed-a280-4860-9f27-f2ef1fb40693 req-f43e9409-ce4c-481e-9d65-80e1160380b5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Updating instance_info_cache with network_info: [{"id": "d8560427-e926-4579-8763-e2a149f487c3", "address": "fa:16:3e:ad:ea:a5", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8560427-e9", "ovs_interfaceid": "d8560427-e926-4579-8763-e2a149f487c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:09:20 np0005604943 nova_compute[238883]: 2026-02-02 12:09:20.004 238887 DEBUG oslo_concurrency.lockutils [req-cb067fed-a280-4860-9f27-f2ef1fb40693 req-f43e9409-ce4c-481e-9d65-80e1160380b5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-f425e716-a5bd-4c8e-8135-829321a4281c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:09:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 23 KiB/s wr, 199 op/s
Feb  2 07:09:20 np0005604943 nova_compute[238883]: 2026-02-02 12:09:20.982 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:09:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/322397974' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:09:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:09:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/322397974' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.713987243775551e-06 of space, bias 1.0, pg target 0.0014141961731326653 quantized to 32 (current 32)
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004734094311981887 of space, bias 1.0, pg target 1.420228293594566 quantized to 32 (current 32)
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.5219075920262466e-06 of space, bias 1.0, pg target 0.00045505037001584774 quantized to 32 (current 32)
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006667375952702074 of space, bias 1.0, pg target 0.199354540985792 quantized to 32 (current 32)
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.047166318968574e-06 of space, bias 4.0, pg target 0.0012524109174864146 quantized to 16 (current 16)
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:09:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb  2 07:09:21 np0005604943 nova_compute[238883]: 2026-02-02 12:09:21.797 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 KiB/s wr, 198 op/s
Feb  2 07:09:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:09:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1589647163' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:09:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:09:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1589647163' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:09:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:09:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Feb  2 07:09:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Feb  2 07:09:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Feb  2 07:09:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.4 KiB/s wr, 251 op/s
Feb  2 07:09:25 np0005604943 nova_compute[238883]: 2026-02-02 12:09:25.986 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 KiB/s wr, 114 op/s
Feb  2 07:09:26 np0005604943 nova_compute[238883]: 2026-02-02 12:09:26.799 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:27 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:27Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:ea:a5 10.100.0.5
Feb  2 07:09:27 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:27Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:ea:a5 10.100.0.5
Feb  2 07:09:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:09:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 416 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 434 KiB/s rd, 3.4 MiB/s wr, 120 op/s
Feb  2 07:09:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 436 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 695 KiB/s rd, 5.8 MiB/s wr, 146 op/s
Feb  2 07:09:31 np0005604943 nova_compute[238883]: 2026-02-02 12:09:31.027 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:31 np0005604943 nova_compute[238883]: 2026-02-02 12:09:31.801 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:09:32 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/116349624' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:09:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 453 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 689 KiB/s rd, 7.0 MiB/s wr, 142 op/s
Feb  2 07:09:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:09:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Feb  2 07:09:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Feb  2 07:09:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Feb  2 07:09:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Feb  2 07:09:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Feb  2 07:09:34 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Feb  2 07:09:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 453 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 820 KiB/s rd, 8.7 MiB/s wr, 126 op/s
Feb  2 07:09:34 np0005604943 nova_compute[238883]: 2026-02-02 12:09:34.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:09:34 np0005604943 nova_compute[238883]: 2026-02-02 12:09:34.871 238887 DEBUG oslo_concurrency.lockutils [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "f425e716-a5bd-4c8e-8135-829321a4281c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:34 np0005604943 nova_compute[238883]: 2026-02-02 12:09:34.871 238887 DEBUG oslo_concurrency.lockutils [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:34 np0005604943 nova_compute[238883]: 2026-02-02 12:09:34.872 238887 DEBUG oslo_concurrency.lockutils [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:34 np0005604943 nova_compute[238883]: 2026-02-02 12:09:34.872 238887 DEBUG oslo_concurrency.lockutils [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:34 np0005604943 nova_compute[238883]: 2026-02-02 12:09:34.872 238887 DEBUG oslo_concurrency.lockutils [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:34 np0005604943 nova_compute[238883]: 2026-02-02 12:09:34.873 238887 INFO nova.compute.manager [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Terminating instance#033[00m
Feb  2 07:09:34 np0005604943 nova_compute[238883]: 2026-02-02 12:09:34.875 238887 DEBUG nova.compute.manager [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:09:34 np0005604943 kernel: tapd8560427-e9 (unregistering): left promiscuous mode
Feb  2 07:09:34 np0005604943 NetworkManager[49093]: <info>  [1770034174.9493] device (tapd8560427-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:09:34 np0005604943 nova_compute[238883]: 2026-02-02 12:09:34.949 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:34 np0005604943 nova_compute[238883]: 2026-02-02 12:09:34.958 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:34 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:34Z|00269|binding|INFO|Releasing lport d8560427-e926-4579-8763-e2a149f487c3 from this chassis (sb_readonly=0)
Feb  2 07:09:34 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:34Z|00270|binding|INFO|Setting lport d8560427-e926-4579-8763-e2a149f487c3 down in Southbound
Feb  2 07:09:34 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:34Z|00271|binding|INFO|Removing iface tapd8560427-e9 ovn-installed in OVS
Feb  2 07:09:34 np0005604943 nova_compute[238883]: 2026-02-02 12:09:34.960 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:34 np0005604943 nova_compute[238883]: 2026-02-02 12:09:34.967 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:34.970 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:ea:a5 10.100.0.5'], port_security=['fa:16:3e:ad:ea:a5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f425e716-a5bd-4c8e-8135-829321a4281c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efa24ae1-9962-44ca-882a-8d146356fcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c7b49c49c104c079544033b07fb2f3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2781f824-7ac1-4375-9bc7-15197abfb3e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b6e8bcf-741b-41c8-a826-9b6dbb1c260b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=d8560427-e926-4579-8763-e2a149f487c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:09:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:34.972 155011 INFO neutron.agent.ovn.metadata.agent [-] Port d8560427-e926-4579-8763-e2a149f487c3 in datapath efa24ae1-9962-44ca-882a-8d146356fcca unbound from our chassis#033[00m
Feb  2 07:09:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:34.974 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network efa24ae1-9962-44ca-882a-8d146356fcca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:09:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:34.976 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[36cc94c6-f74a-4eb4-8ca9-66377dcc3fef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:34 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:34.976 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca namespace which is not needed anymore#033[00m
Feb  2 07:09:35 np0005604943 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Feb  2 07:09:35 np0005604943 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Consumed 15.211s CPU time.
Feb  2 07:09:35 np0005604943 systemd-machined[206973]: Machine qemu-27-instance-0000001b terminated.
Feb  2 07:09:35 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269198]: [NOTICE]   (269202) : haproxy version is 2.8.14-c23fe91
Feb  2 07:09:35 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269198]: [NOTICE]   (269202) : path to executable is /usr/sbin/haproxy
Feb  2 07:09:35 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269198]: [WARNING]  (269202) : Exiting Master process...
Feb  2 07:09:35 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269198]: [ALERT]    (269202) : Current worker (269204) exited with code 143 (Terminated)
Feb  2 07:09:35 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269198]: [WARNING]  (269202) : All workers exited. Exiting... (0)
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.124 238887 INFO nova.virt.libvirt.driver [-] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Instance destroyed successfully.#033[00m
Feb  2 07:09:35 np0005604943 systemd[1]: libpod-37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445.scope: Deactivated successfully.
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.125 238887 DEBUG nova.objects.instance [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lazy-loading 'resources' on Instance uuid f425e716-a5bd-4c8e-8135-829321a4281c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:09:35 np0005604943 podman[269287]: 2026-02-02 12:09:35.131856905 +0000 UTC m=+0.082220236 container died 37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.136 238887 DEBUG nova.virt.libvirt.vif [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:09:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1095099264',display_name='tempest-TransferEncryptedVolumeTest-server-1095099264',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1095099264',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7aUv7PBeO78zp3TdCQ5pHHrUWfgatS9ASOECbWv5UGrW7YbMyQ2Q5xaozZcd0G8LLxfP6XSKv3an4flOYSD0UKdfJBDp1c8Bpfee8qRIo6Ih80jJn9izsYTHHCaZomBw==',key_name='tempest-TransferEncryptedVolumeTest-1781235943',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:09:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c7b49c49c104c079544033b07fb2f3d',ramdisk_id='',reservation_id='r-hcfjc9t6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-347797880',owner_user_name='tempest-TransferEncryptedVolumeTest-347797880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:09:15Z,user_data=None,user_id='cd5824e18d5e443cb24d3bf55ff2c553',uuid=f425e716-a5bd-4c8e-8135-829321a4281c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8560427-e926-4579-8763-e2a149f487c3", "address": "fa:16:3e:ad:ea:a5", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8560427-e9", "ovs_interfaceid": "d8560427-e926-4579-8763-e2a149f487c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.137 238887 DEBUG nova.network.os_vif_util [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converting VIF {"id": "d8560427-e926-4579-8763-e2a149f487c3", "address": "fa:16:3e:ad:ea:a5", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8560427-e9", "ovs_interfaceid": "d8560427-e926-4579-8763-e2a149f487c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.137 238887 DEBUG nova.network.os_vif_util [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ad:ea:a5,bridge_name='br-int',has_traffic_filtering=True,id=d8560427-e926-4579-8763-e2a149f487c3,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8560427-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.138 238887 DEBUG os_vif [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:ea:a5,bridge_name='br-int',has_traffic_filtering=True,id=d8560427-e926-4579-8763-e2a149f487c3,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8560427-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.140 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.140 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8560427-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.142 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.145 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.147 238887 INFO os_vif [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:ea:a5,bridge_name='br-int',has_traffic_filtering=True,id=d8560427-e926-4579-8763-e2a149f487c3,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8560427-e9')#033[00m
Feb  2 07:09:35 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445-userdata-shm.mount: Deactivated successfully.
Feb  2 07:09:35 np0005604943 systemd[1]: var-lib-containers-storage-overlay-cdfb9c93815b9dc82e7a2d824e901d15e38cebd95e987d9a95f7d25acfc6c1f9-merged.mount: Deactivated successfully.
Feb  2 07:09:35 np0005604943 podman[269287]: 2026-02-02 12:09:35.198355699 +0000 UTC m=+0.148719020 container cleanup 37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb  2 07:09:35 np0005604943 systemd[1]: libpod-conmon-37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445.scope: Deactivated successfully.
Feb  2 07:09:35 np0005604943 podman[269348]: 2026-02-02 12:09:35.251409898 +0000 UTC m=+0.036222439 container remove 37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb  2 07:09:35 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:35.255 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bd594c-fdaa-4dce-92d3-6d04c5d19178]: (4, ('Mon Feb  2 12:09:35 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca (37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445)\n37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445\nMon Feb  2 12:09:35 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca (37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445)\n37998fea4ceae6e81cee18783b2c64857598bf231fde78558da18c96b6ef6445\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:35 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:35.256 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c8828cfd-c016-4470-bd6a-312a4ea56c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:35 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:35.257 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefa24ae1-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:35 np0005604943 kernel: tapefa24ae1-90: left promiscuous mode
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.260 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.267 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:35 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:35.269 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9ad41c-bc1a-442c-b8fa-6ed9a6ff2d11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:35 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:35.287 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[f84c1d2c-d6bb-49c0-8811-282baf8f5623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:35 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:35.288 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6b49879d-5dfe-4a77-bbeb-e70902f44788]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.290 238887 INFO nova.virt.libvirt.driver [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Deleting instance files /var/lib/nova/instances/f425e716-a5bd-4c8e-8135-829321a4281c_del#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.291 238887 INFO nova.virt.libvirt.driver [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Deletion of /var/lib/nova/instances/f425e716-a5bd-4c8e-8135-829321a4281c_del complete#033[00m
Feb  2 07:09:35 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:35.302 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a151ece6-da32-4ae7-ac9f-86291e6f6c4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459748, 'reachable_time': 33109, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269364, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:35 np0005604943 systemd[1]: run-netns-ovnmeta\x2defa24ae1\x2d9962\x2d44ca\x2d882a\x2d8d146356fcca.mount: Deactivated successfully.
Feb  2 07:09:35 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:35.306 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:09:35 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:35.306 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[21a05287-ca18-47b8-a73d-9cf02d9f1fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.349 238887 INFO nova.compute.manager [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.350 238887 DEBUG oslo.service.loopingcall [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.350 238887 DEBUG nova.compute.manager [-] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.350 238887 DEBUG nova.network.neutron [-] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.437 238887 DEBUG nova.compute.manager [req-b853ba4e-1463-436b-98ab-68ee30a285ff req-c0166129-9362-433b-bab6-d9c88fe5c1d5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Received event network-vif-unplugged-d8560427-e926-4579-8763-e2a149f487c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.437 238887 DEBUG oslo_concurrency.lockutils [req-b853ba4e-1463-436b-98ab-68ee30a285ff req-c0166129-9362-433b-bab6-d9c88fe5c1d5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.439 238887 DEBUG oslo_concurrency.lockutils [req-b853ba4e-1463-436b-98ab-68ee30a285ff req-c0166129-9362-433b-bab6-d9c88fe5c1d5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.439 238887 DEBUG oslo_concurrency.lockutils [req-b853ba4e-1463-436b-98ab-68ee30a285ff req-c0166129-9362-433b-bab6-d9c88fe5c1d5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.439 238887 DEBUG nova.compute.manager [req-b853ba4e-1463-436b-98ab-68ee30a285ff req-c0166129-9362-433b-bab6-d9c88fe5c1d5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] No waiting events found dispatching network-vif-unplugged-d8560427-e926-4579-8763-e2a149f487c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:09:35 np0005604943 nova_compute[238883]: 2026-02-02 12:09:35.439 238887 DEBUG nova.compute.manager [req-b853ba4e-1463-436b-98ab-68ee30a285ff req-c0166129-9362-433b-bab6-d9c88fe5c1d5 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Received event network-vif-unplugged-d8560427-e926-4579-8763-e2a149f487c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:09:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Feb  2 07:09:36 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Feb  2 07:09:36 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Feb  2 07:09:36 np0005604943 nova_compute[238883]: 2026-02-02 12:09:36.374 238887 DEBUG nova.network.neutron [-] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:09:36 np0005604943 nova_compute[238883]: 2026-02-02 12:09:36.393 238887 INFO nova.compute.manager [-] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Took 1.04 seconds to deallocate network for instance.#033[00m
Feb  2 07:09:36 np0005604943 nova_compute[238883]: 2026-02-02 12:09:36.452 238887 DEBUG nova.compute.manager [req-9bdaeabc-4c82-44cf-955e-7c07cba175af req-1d096f24-7e99-404b-8f85-18f5eb223912 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Received event network-vif-deleted-d8560427-e926-4579-8763-e2a149f487c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:09:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 453 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.9 MiB/s wr, 18 op/s
Feb  2 07:09:36 np0005604943 nova_compute[238883]: 2026-02-02 12:09:36.554 238887 INFO nova.compute.manager [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Took 0.16 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:09:36 np0005604943 nova_compute[238883]: 2026-02-02 12:09:36.597 238887 DEBUG oslo_concurrency.lockutils [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:36 np0005604943 nova_compute[238883]: 2026-02-02 12:09:36.598 238887 DEBUG oslo_concurrency.lockutils [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:36 np0005604943 nova_compute[238883]: 2026-02-02 12:09:36.660 238887 DEBUG oslo_concurrency.processutils [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:36 np0005604943 nova_compute[238883]: 2026-02-02 12:09:36.803 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Feb  2 07:09:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Feb  2 07:09:37 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Feb  2 07:09:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:09:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297519136' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.242 238887 DEBUG oslo_concurrency.processutils [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.249 238887 DEBUG nova.compute.provider_tree [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.288 238887 DEBUG nova.scheduler.client.report [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.314 238887 DEBUG oslo_concurrency.lockutils [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.350 238887 INFO nova.scheduler.client.report [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Deleted allocations for instance f425e716-a5bd-4c8e-8135-829321a4281c#033[00m
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.410 238887 DEBUG oslo_concurrency.lockutils [None req-03add918-8468-4238-84fe-cdf0a8b50798 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:09:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/995620626' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:09:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:09:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/995620626' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.516 238887 DEBUG nova.compute.manager [req-80950bd0-19c5-4c48-bd15-ea0a3a759fb8 req-fb681eba-05ee-4765-b289-1c932088d642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Received event network-vif-plugged-d8560427-e926-4579-8763-e2a149f487c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.517 238887 DEBUG oslo_concurrency.lockutils [req-80950bd0-19c5-4c48-bd15-ea0a3a759fb8 req-fb681eba-05ee-4765-b289-1c932088d642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.517 238887 DEBUG oslo_concurrency.lockutils [req-80950bd0-19c5-4c48-bd15-ea0a3a759fb8 req-fb681eba-05ee-4765-b289-1c932088d642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.517 238887 DEBUG oslo_concurrency.lockutils [req-80950bd0-19c5-4c48-bd15-ea0a3a759fb8 req-fb681eba-05ee-4765-b289-1c932088d642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "f425e716-a5bd-4c8e-8135-829321a4281c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.517 238887 DEBUG nova.compute.manager [req-80950bd0-19c5-4c48-bd15-ea0a3a759fb8 req-fb681eba-05ee-4765-b289-1c932088d642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] No waiting events found dispatching network-vif-plugged-d8560427-e926-4579-8763-e2a149f487c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.518 238887 WARNING nova.compute.manager [req-80950bd0-19c5-4c48-bd15-ea0a3a759fb8 req-fb681eba-05ee-4765-b289-1c932088d642 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Received unexpected event network-vif-plugged-d8560427-e926-4579-8763-e2a149f487c3 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:09:37 np0005604943 nova_compute[238883]: 2026-02-02 12:09:37.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:09:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:09:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 300 active+clean; 453 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 40 KiB/s wr, 115 op/s
Feb  2 07:09:39 np0005604943 nova_compute[238883]: 2026-02-02 12:09:39.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:09:39 np0005604943 nova_compute[238883]: 2026-02-02 12:09:39.668 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:39 np0005604943 nova_compute[238883]: 2026-02-02 12:09:39.668 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:39 np0005604943 nova_compute[238883]: 2026-02-02 12:09:39.669 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:39 np0005604943 nova_compute[238883]: 2026-02-02 12:09:39.669 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:09:39 np0005604943 nova_compute[238883]: 2026-02-02 12:09:39.669 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:40 np0005604943 nova_compute[238883]: 2026-02-02 12:09:40.143 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:09:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2060452585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:09:40 np0005604943 nova_compute[238883]: 2026-02-02 12:09:40.241 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:40 np0005604943 nova_compute[238883]: 2026-02-02 12:09:40.406 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:09:40 np0005604943 nova_compute[238883]: 2026-02-02 12:09:40.407 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4346MB free_disk=59.988001491874456GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:09:40 np0005604943 nova_compute[238883]: 2026-02-02 12:09:40.408 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:40 np0005604943 nova_compute[238883]: 2026-02-02 12:09:40.408 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:40 np0005604943 nova_compute[238883]: 2026-02-02 12:09:40.463 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:09:40 np0005604943 nova_compute[238883]: 2026-02-02 12:09:40.464 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:09:40 np0005604943 nova_compute[238883]: 2026-02-02 12:09:40.484 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 300 active+clean; 453 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 36 KiB/s wr, 123 op/s
Feb  2 07:09:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:09:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:09:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:09:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:09:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:09:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:09:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:09:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3389898435' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:09:41 np0005604943 nova_compute[238883]: 2026-02-02 12:09:41.045 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:41 np0005604943 nova_compute[238883]: 2026-02-02 12:09:41.053 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:09:41 np0005604943 nova_compute[238883]: 2026-02-02 12:09:41.069 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:09:41 np0005604943 nova_compute[238883]: 2026-02-02 12:09:41.142 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:09:41 np0005604943 nova_compute[238883]: 2026-02-02 12:09:41.142 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:41 np0005604943 nova_compute[238883]: 2026-02-02 12:09:41.806 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:42 np0005604943 nova_compute[238883]: 2026-02-02 12:09:42.142 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:09:42 np0005604943 nova_compute[238883]: 2026-02-02 12:09:42.143 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:09:42 np0005604943 nova_compute[238883]: 2026-02-02 12:09:42.143 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 07:09:42 np0005604943 nova_compute[238883]: 2026-02-02 12:09:42.156 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 07:09:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 453 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 29 KiB/s wr, 99 op/s
Feb  2 07:09:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:09:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Feb  2 07:09:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Feb  2 07:09:42 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Feb  2 07:09:43 np0005604943 nova_compute[238883]: 2026-02-02 12:09:43.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:09:43 np0005604943 nova_compute[238883]: 2026-02-02 12:09:43.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:09:43 np0005604943 nova_compute[238883]: 2026-02-02 12:09:43.930 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:43 np0005604943 nova_compute[238883]: 2026-02-02 12:09:43.930 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:43 np0005604943 nova_compute[238883]: 2026-02-02 12:09:43.955 238887 DEBUG nova.compute.manager [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.023 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.024 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.031 238887 DEBUG nova.virt.hardware [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.032 238887 INFO nova.compute.claims [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.153 238887 DEBUG oslo_concurrency.processutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:09:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1645465858' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:09:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 453 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 29 KiB/s wr, 100 op/s
Feb  2 07:09:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:09:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738589319' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.737 238887 DEBUG oslo_concurrency.processutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.742 238887 DEBUG nova.compute.provider_tree [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.762 238887 DEBUG nova.scheduler.client.report [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.797 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.798 238887 DEBUG nova.compute.manager [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:09:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Feb  2 07:09:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Feb  2 07:09:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.854 238887 DEBUG nova.compute.manager [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.855 238887 DEBUG nova.network.neutron [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.878 238887 INFO nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.903 238887 DEBUG nova.compute.manager [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:09:44 np0005604943 nova_compute[238883]: 2026-02-02 12:09:44.963 238887 INFO nova.virt.block_device [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Booting with volume 67fb0117-6283-4cc0-b28b-6d772465dc05 at /dev/vda#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.074 238887 DEBUG nova.policy [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cd5824e18d5e443cb24d3bf55ff2c553', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c7b49c49c104c079544033b07fb2f3d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.140 238887 DEBUG os_brick.utils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.141 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.145 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.155 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.155 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[08864ab3-0c29-4f3d-907a-1e73f9cab76b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.157 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.166 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.167 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[bd440228-695c-40ad-b63e-76d8c5cc4693]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.169 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.180 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.180 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[6b382ba5-9147-4178-8928-b25fcb7c4e22]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.182 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[92d075f3-7d5a-4dee-86ea-fcfb83976376]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.182 238887 DEBUG oslo_concurrency.processutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.200 238887 DEBUG oslo_concurrency.processutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.202 238887 DEBUG os_brick.initiator.connectors.lightos [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.203 238887 DEBUG os_brick.initiator.connectors.lightos [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.203 238887 DEBUG os_brick.initiator.connectors.lightos [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.203 238887 DEBUG os_brick.utils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.204 238887 DEBUG nova.virt.block_device [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Updating existing volume attachment record: d8a13a89-d7a9-410a-97a5-d220046b8417 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:09:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:09:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2059889523' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:09:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:09:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2059889523' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.636 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.642 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:09:45 np0005604943 nova_compute[238883]: 2026-02-02 12:09:45.719 238887 DEBUG nova.network.neutron [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Successfully created port: ed2a76ec-632f-4f24-b7b7-e89921520207 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:09:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Feb  2 07:09:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Feb  2 07:09:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Feb  2 07:09:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:09:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3591046149' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.277 238887 DEBUG nova.compute.manager [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.279 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.279 238887 INFO nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Creating image(s)#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.280 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.280 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Ensure instance console log exists: /var/lib/nova/instances/63f7d822-7481-4c48-a8f8-d900cc1cbb7d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.281 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.281 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.282 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.518 238887 DEBUG nova.network.neutron [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Successfully updated port: ed2a76ec-632f-4f24-b7b7-e89921520207 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:09:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 453 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 0 op/s
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.536 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "refresh_cache-63f7d822-7481-4c48-a8f8-d900cc1cbb7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.536 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquired lock "refresh_cache-63f7d822-7481-4c48-a8f8-d900cc1cbb7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.536 238887 DEBUG nova.network.neutron [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.601 238887 DEBUG nova.compute.manager [req-f7388518-7294-4134-b2a8-8cbd1c512cdd req-6037b96c-2445-4e59-8dcc-5f35a720d3d4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Received event network-changed-ed2a76ec-632f-4f24-b7b7-e89921520207 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.601 238887 DEBUG nova.compute.manager [req-f7388518-7294-4134-b2a8-8cbd1c512cdd req-6037b96c-2445-4e59-8dcc-5f35a720d3d4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Refreshing instance network info cache due to event network-changed-ed2a76ec-632f-4f24-b7b7-e89921520207. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.602 238887 DEBUG oslo_concurrency.lockutils [req-f7388518-7294-4134-b2a8-8cbd1c512cdd req-6037b96c-2445-4e59-8dcc-5f35a720d3d4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-63f7d822-7481-4c48-a8f8-d900cc1cbb7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.702 238887 DEBUG nova.network.neutron [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:09:46 np0005604943 nova_compute[238883]: 2026-02-02 12:09:46.808 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.319 238887 DEBUG nova.network.neutron [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Updating instance_info_cache with network_info: [{"id": "ed2a76ec-632f-4f24-b7b7-e89921520207", "address": "fa:16:3e:c7:72:dd", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped2a76ec-63", "ovs_interfaceid": "ed2a76ec-632f-4f24-b7b7-e89921520207", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.342 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Releasing lock "refresh_cache-63f7d822-7481-4c48-a8f8-d900cc1cbb7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.342 238887 DEBUG nova.compute.manager [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Instance network_info: |[{"id": "ed2a76ec-632f-4f24-b7b7-e89921520207", "address": "fa:16:3e:c7:72:dd", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped2a76ec-63", "ovs_interfaceid": "ed2a76ec-632f-4f24-b7b7-e89921520207", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.343 238887 DEBUG oslo_concurrency.lockutils [req-f7388518-7294-4134-b2a8-8cbd1c512cdd req-6037b96c-2445-4e59-8dcc-5f35a720d3d4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-63f7d822-7481-4c48-a8f8-d900cc1cbb7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.343 238887 DEBUG nova.network.neutron [req-f7388518-7294-4134-b2a8-8cbd1c512cdd req-6037b96c-2445-4e59-8dcc-5f35a720d3d4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Refreshing network info cache for port ed2a76ec-632f-4f24-b7b7-e89921520207 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.347 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Start _get_guest_xml network_info=[{"id": "ed2a76ec-632f-4f24-b7b7-e89921520207", "address": "fa:16:3e:c7:72:dd", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped2a76ec-63", "ovs_interfaceid": "ed2a76ec-632f-4f24-b7b7-e89921520207", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'attachment_id': 'd8a13a89-d7a9-410a-97a5-d220046b8417', 'delete_on_termination': False, 'guest_format': None, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-67fb0117-6283-4cc0-b28b-6d772465dc05', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '67fb0117-6283-4cc0-b28b-6d772465dc05', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '63f7d822-7481-4c48-a8f8-d900cc1cbb7d', 'attached_at': '', 'detached_at': '', 'volume_id': '67fb0117-6283-4cc0-b28b-6d772465dc05', 'serial': '67fb0117-6283-4cc0-b28b-6d772465dc05'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.352 238887 WARNING nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.360 238887 DEBUG nova.virt.libvirt.host [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.361 238887 DEBUG nova.virt.libvirt.host [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.364 238887 DEBUG nova.virt.libvirt.host [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.364 238887 DEBUG nova.virt.libvirt.host [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.365 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.365 238887 DEBUG nova.virt.hardware [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.366 238887 DEBUG nova.virt.hardware [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.366 238887 DEBUG nova.virt.hardware [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.366 238887 DEBUG nova.virt.hardware [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.367 238887 DEBUG nova.virt.hardware [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.367 238887 DEBUG nova.virt.hardware [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.367 238887 DEBUG nova.virt.hardware [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.367 238887 DEBUG nova.virt.hardware [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.367 238887 DEBUG nova.virt.hardware [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.368 238887 DEBUG nova.virt.hardware [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.368 238887 DEBUG nova.virt.hardware [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.395 238887 DEBUG nova.storage.rbd_utils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] rbd image 63f7d822-7481-4c48-a8f8-d900cc1cbb7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.400 238887 DEBUG oslo_concurrency.processutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:09:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Feb  2 07:09:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Feb  2 07:09:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Feb  2 07:09:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:09:47 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1410478115' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:09:47 np0005604943 nova_compute[238883]: 2026-02-02 12:09:47.903 238887 DEBUG oslo_concurrency.processutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.077 238887 DEBUG os_brick.encryptors [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Using volume encryption metadata '{'encryption_key_id': '5a621a1b-1b61-44f5-b213-584f8c4910b2', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-67fb0117-6283-4cc0-b28b-6d772465dc05', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '67fb0117-6283-4cc0-b28b-6d772465dc05', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '63f7d822-7481-4c48-a8f8-d900cc1cbb7d', 'attached_at': '', 'detached_at': '', 'volume_id': '67fb0117-6283-4cc0-b28b-6d772465dc05', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Feb  2 07:09:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:09:48 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3658032843' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.080 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.098 238887 DEBUG barbicanclient.v1.secrets [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.099 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.127 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.127 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.150 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.151 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.173 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.174 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.217 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.218 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.253 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.253 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.292 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.293 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.316 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.317 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.340 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.341 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.365 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.366 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.390 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.391 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.417 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.418 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.470 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.471 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.490 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.491 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.512 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.513 238887 INFO barbicanclient.base [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Calculated Secrets uuid ref: secrets/5a621a1b-1b61-44f5-b213-584f8c4910b2#033[00m
Feb  2 07:09:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 453 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 2.1 KiB/s wr, 14 op/s
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.538 238887 DEBUG barbicanclient.client [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.539 238887 DEBUG nova.virt.libvirt.host [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Secret XML: <secret ephemeral="no" private="no">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  <usage type="volume">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <volume>67fb0117-6283-4cc0-b28b-6d772465dc05</volume>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  </usage>
Feb  2 07:09:48 np0005604943 nova_compute[238883]: </secret>
Feb  2 07:09:48 np0005604943 nova_compute[238883]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.571 238887 DEBUG nova.virt.libvirt.vif [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:09:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1899434577',display_name='tempest-TransferEncryptedVolumeTest-server-1899434577',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1899434577',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7aUv7PBeO78zp3TdCQ5pHHrUWfgatS9ASOECbWv5UGrW7YbMyQ2Q5xaozZcd0G8LLxfP6XSKv3an4flOYSD0UKdfJBDp1c8Bpfee8qRIo6Ih80jJn9izsYTHHCaZomBw==',key_name='tempest-TransferEncryptedVolumeTest-1781235943',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c7b49c49c104c079544033b07fb2f3d',ramdisk_id='',reservation_id='r-47c40zi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-347797880',owner_user_name='tempest-TransferEncryptedVolumeTest-347797880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:09:44Z,user_data=None,user_id='cd5824e18d5e443cb24d3bf55ff2c553',uuid=63f7d822-7481-4c48-a8f8-d900cc1cbb7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed2a76ec-632f-4f24-b7b7-e89921520207", "address": "fa:16:3e:c7:72:dd", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped2a76ec-63", "ovs_interfaceid": "ed2a76ec-632f-4f24-b7b7-e89921520207", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.571 238887 DEBUG nova.network.os_vif_util [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converting VIF {"id": "ed2a76ec-632f-4f24-b7b7-e89921520207", "address": "fa:16:3e:c7:72:dd", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped2a76ec-63", "ovs_interfaceid": "ed2a76ec-632f-4f24-b7b7-e89921520207", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.573 238887 DEBUG nova.network.os_vif_util [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:72:dd,bridge_name='br-int',has_traffic_filtering=True,id=ed2a76ec-632f-4f24-b7b7-e89921520207,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped2a76ec-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.577 238887 DEBUG nova.objects.instance [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lazy-loading 'pci_devices' on Instance uuid 63f7d822-7481-4c48-a8f8-d900cc1cbb7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.596 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  <uuid>63f7d822-7481-4c48-a8f8-d900cc1cbb7d</uuid>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  <name>instance-0000001c</name>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1899434577</nova:name>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:09:47</nova:creationTime>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        <nova:user uuid="cd5824e18d5e443cb24d3bf55ff2c553">tempest-TransferEncryptedVolumeTest-347797880-project-member</nova:user>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        <nova:project uuid="4c7b49c49c104c079544033b07fb2f3d">tempest-TransferEncryptedVolumeTest-347797880</nova:project>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        <nova:port uuid="ed2a76ec-632f-4f24-b7b7-e89921520207">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <entry name="serial">63f7d822-7481-4c48-a8f8-d900cc1cbb7d</entry>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <entry name="uuid">63f7d822-7481-4c48-a8f8-d900cc1cbb7d</entry>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/63f7d822-7481-4c48-a8f8-d900cc1cbb7d_disk.config">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="volumes/volume-67fb0117-6283-4cc0-b28b-6d772465dc05">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <serial>67fb0117-6283-4cc0-b28b-6d772465dc05</serial>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <encryption format="luks">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:        <secret type="passphrase" uuid="b3e93d47-4474-43f8-8f7b-d91c305bef73"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      </encryption>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:c7:72:dd"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <target dev="taped2a76ec-63"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/63f7d822-7481-4c48-a8f8-d900cc1cbb7d/console.log" append="off"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:09:48 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:09:48 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:09:48 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:09:48 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.597 238887 DEBUG nova.compute.manager [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Preparing to wait for external event network-vif-plugged-ed2a76ec-632f-4f24-b7b7-e89921520207 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.597 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.597 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.598 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.598 238887 DEBUG nova.virt.libvirt.vif [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:09:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1899434577',display_name='tempest-TransferEncryptedVolumeTest-server-1899434577',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1899434577',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7aUv7PBeO78zp3TdCQ5pHHrUWfgatS9ASOECbWv5UGrW7YbMyQ2Q5xaozZcd0G8LLxfP6XSKv3an4flOYSD0UKdfJBDp1c8Bpfee8qRIo6Ih80jJn9izsYTHHCaZomBw==',key_name='tempest-TransferEncryptedVolumeTest-1781235943',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c7b49c49c104c079544033b07fb2f3d',ramdisk_id='',reservation_id='r-47c40zi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-347797880',owner_user_name='tempest-TransferEncryptedVolumeTest-347797880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:09:44Z,user_data=None,user_id='cd5824e18d5e443cb24d3bf55ff2c553',uuid=63f7d822-7481-4c48-a8f8-d900cc1cbb7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed2a76ec-632f-4f24-b7b7-e89921520207", "address": "fa:16:3e:c7:72:dd", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped2a76ec-63", "ovs_interfaceid": "ed2a76ec-632f-4f24-b7b7-e89921520207", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.599 238887 DEBUG nova.network.os_vif_util [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converting VIF {"id": "ed2a76ec-632f-4f24-b7b7-e89921520207", "address": "fa:16:3e:c7:72:dd", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped2a76ec-63", "ovs_interfaceid": "ed2a76ec-632f-4f24-b7b7-e89921520207", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.599 238887 DEBUG nova.network.os_vif_util [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:72:dd,bridge_name='br-int',has_traffic_filtering=True,id=ed2a76ec-632f-4f24-b7b7-e89921520207,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped2a76ec-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.600 238887 DEBUG os_vif [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:72:dd,bridge_name='br-int',has_traffic_filtering=True,id=ed2a76ec-632f-4f24-b7b7-e89921520207,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped2a76ec-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.600 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.601 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.601 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.605 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.605 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped2a76ec-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.606 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped2a76ec-63, col_values=(('external_ids', {'iface-id': 'ed2a76ec-632f-4f24-b7b7-e89921520207', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:72:dd', 'vm-uuid': '63f7d822-7481-4c48-a8f8-d900cc1cbb7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.607 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:48 np0005604943 NetworkManager[49093]: <info>  [1770034188.6085] manager: (taped2a76ec-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.610 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.614 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.615 238887 INFO os_vif [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:72:dd,bridge_name='br-int',has_traffic_filtering=True,id=ed2a76ec-632f-4f24-b7b7-e89921520207,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped2a76ec-63')#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.657 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.658 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.659 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] No VIF found with MAC fa:16:3e:c7:72:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.659 238887 INFO nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Using config drive#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.681 238887 DEBUG nova.storage.rbd_utils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] rbd image 63f7d822-7481-4c48-a8f8-d900cc1cbb7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.699 238887 DEBUG nova.network.neutron [req-f7388518-7294-4134-b2a8-8cbd1c512cdd req-6037b96c-2445-4e59-8dcc-5f35a720d3d4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Updated VIF entry in instance network info cache for port ed2a76ec-632f-4f24-b7b7-e89921520207. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.700 238887 DEBUG nova.network.neutron [req-f7388518-7294-4134-b2a8-8cbd1c512cdd req-6037b96c-2445-4e59-8dcc-5f35a720d3d4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Updating instance_info_cache with network_info: [{"id": "ed2a76ec-632f-4f24-b7b7-e89921520207", "address": "fa:16:3e:c7:72:dd", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped2a76ec-63", "ovs_interfaceid": "ed2a76ec-632f-4f24-b7b7-e89921520207", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.710 238887 DEBUG oslo_concurrency.lockutils [req-f7388518-7294-4134-b2a8-8cbd1c512cdd req-6037b96c-2445-4e59-8dcc-5f35a720d3d4 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-63f7d822-7481-4c48-a8f8-d900cc1cbb7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:09:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Feb  2 07:09:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Feb  2 07:09:48 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.970 238887 INFO nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Creating config drive at /var/lib/nova/instances/63f7d822-7481-4c48-a8f8-d900cc1cbb7d/disk.config#033[00m
Feb  2 07:09:48 np0005604943 nova_compute[238883]: 2026-02-02 12:09:48.975 238887 DEBUG oslo_concurrency.processutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/63f7d822-7481-4c48-a8f8-d900cc1cbb7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0yuadodl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:49 np0005604943 podman[269524]: 2026-02-02 12:09:49.04303358 +0000 UTC m=+0.060921829 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Feb  2 07:09:49 np0005604943 podman[269523]: 2026-02-02 12:09:49.080292005 +0000 UTC m=+0.098607725 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.108 238887 DEBUG oslo_concurrency.processutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/63f7d822-7481-4c48-a8f8-d900cc1cbb7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp0yuadodl" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.130 238887 DEBUG nova.storage.rbd_utils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] rbd image 63f7d822-7481-4c48-a8f8-d900cc1cbb7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.133 238887 DEBUG oslo_concurrency.processutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/63f7d822-7481-4c48-a8f8-d900cc1cbb7d/disk.config 63f7d822-7481-4c48-a8f8-d900cc1cbb7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.250 238887 DEBUG oslo_concurrency.processutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/63f7d822-7481-4c48-a8f8-d900cc1cbb7d/disk.config 63f7d822-7481-4c48-a8f8-d900cc1cbb7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.251 238887 INFO nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Deleting local config drive /var/lib/nova/instances/63f7d822-7481-4c48-a8f8-d900cc1cbb7d/disk.config because it was imported into RBD.#033[00m
Feb  2 07:09:49 np0005604943 kernel: taped2a76ec-63: entered promiscuous mode
Feb  2 07:09:49 np0005604943 NetworkManager[49093]: <info>  [1770034189.3162] manager: (taped2a76ec-63): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Feb  2 07:09:49 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:49Z|00272|binding|INFO|Claiming lport ed2a76ec-632f-4f24-b7b7-e89921520207 for this chassis.
Feb  2 07:09:49 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:49Z|00273|binding|INFO|ed2a76ec-632f-4f24-b7b7-e89921520207: Claiming fa:16:3e:c7:72:dd 10.100.0.14
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.316 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.324 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:72:dd 10.100.0.14'], port_security=['fa:16:3e:c7:72:dd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '63f7d822-7481-4c48-a8f8-d900cc1cbb7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efa24ae1-9962-44ca-882a-8d146356fcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c7b49c49c104c079544033b07fb2f3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2781f824-7ac1-4375-9bc7-15197abfb3e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b6e8bcf-741b-41c8-a826-9b6dbb1c260b, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=ed2a76ec-632f-4f24-b7b7-e89921520207) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:09:49 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:49Z|00274|binding|INFO|Setting lport ed2a76ec-632f-4f24-b7b7-e89921520207 ovn-installed in OVS
Feb  2 07:09:49 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:49Z|00275|binding|INFO|Setting lport ed2a76ec-632f-4f24-b7b7-e89921520207 up in Southbound
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.327 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.329 155011 INFO neutron.agent.ovn.metadata.agent [-] Port ed2a76ec-632f-4f24-b7b7-e89921520207 in datapath efa24ae1-9962-44ca-882a-8d146356fcca bound to our chassis#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.331 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network efa24ae1-9962-44ca-882a-8d146356fcca#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.344 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a6644ae9-ffa0-4290-8e26-09c416bab1bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.345 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapefa24ae1-91 in ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.348 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapefa24ae1-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.348 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[71be9cf0-4ae0-4003-b958-53de01d4b11b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 systemd-machined[206973]: New machine qemu-28-instance-0000001c.
Feb  2 07:09:49 np0005604943 systemd-udevd[269626]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.349 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[45c42037-0379-416a-9608-cdadae8d195d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.360 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[8de502d5-0c91-409d-a2a4-b8b06bca9e30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 NetworkManager[49093]: <info>  [1770034189.3637] device (taped2a76ec-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:09:49 np0005604943 NetworkManager[49093]: <info>  [1770034189.3644] device (taped2a76ec-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:09:49 np0005604943 systemd[1]: Started Virtual Machine qemu-28-instance-0000001c.
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.385 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6100a20d-e57c-4e4c-8071-0e23abf3935e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.411 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[634bb192-08df-4d9f-95bd-1de168922585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.416 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[8e997aa3-bbc2-40cf-a693-8714b2bc3dfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 NetworkManager[49093]: <info>  [1770034189.4169] manager: (tapefa24ae1-90): new Veth device (/org/freedesktop/NetworkManager/Devices/140)
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.440 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc561fc-f1a2-4a57-bf6b-246cace99d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.444 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ea6ccc-42fa-4a4c-b4e9-b3375a8406ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 NetworkManager[49093]: <info>  [1770034189.4638] device (tapefa24ae1-90): carrier: link connected
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.466 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb48674-7ce3-417f-bdbb-6bc559f13a3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.482 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1df4b869-98b0-44c3-a58e-182605657254]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefa24ae1-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:4e:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463481, 'reachable_time': 40992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269658, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.500 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[09b81de4-62b4-4394-93da-977babd304e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:4ebf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463481, 'tstamp': 463481}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269659, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.518 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[f8744698-6447-49b1-ab20-0fff1abb8763]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefa24ae1-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:4e:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463481, 'reachable_time': 40992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269660, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.548 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a2268d7d-d374-4ce0-89e3-88f8475914de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.598 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[b04695e8-6f56-43eb-8448-af02bbdec30c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.600 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefa24ae1-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.600 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.600 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefa24ae1-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:49 np0005604943 NetworkManager[49093]: <info>  [1770034189.6031] manager: (tapefa24ae1-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Feb  2 07:09:49 np0005604943 kernel: tapefa24ae1-90: entered promiscuous mode
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.602 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.604 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.605 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapefa24ae1-90, col_values=(('external_ids', {'iface-id': '88fa0d04-0a79-4556-b2c6-d65a3a18ab58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.606 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:49 np0005604943 ovn_controller[145056]: 2026-02-02T12:09:49Z|00276|binding|INFO|Releasing lport 88fa0d04-0a79-4556-b2c6-d65a3a18ab58 from this chassis (sb_readonly=0)
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.612 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.613 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.613 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/efa24ae1-9962-44ca-882a-8d146356fcca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/efa24ae1-9962-44ca-882a-8d146356fcca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.614 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[46272f86-b104-4e87-b1c9-8b2eabbae7d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.614 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-efa24ae1-9962-44ca-882a-8d146356fcca
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/efa24ae1-9962-44ca-882a-8d146356fcca.pid.haproxy
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID efa24ae1-9962-44ca-882a-8d146356fcca
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 07:09:49 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:49.616 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'env', 'PROCESS_TAG=haproxy-efa24ae1-9962-44ca-882a-8d146356fcca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/efa24ae1-9962-44ca-882a-8d146356fcca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.636 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.807 238887 DEBUG nova.compute.manager [req-5e2c80b0-ac06-4bca-83d8-a7062fcf0d67 req-756227ae-60fe-4b0b-ba2b-1783b7ec64a6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Received event network-vif-plugged-ed2a76ec-632f-4f24-b7b7-e89921520207 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.807 238887 DEBUG oslo_concurrency.lockutils [req-5e2c80b0-ac06-4bca-83d8-a7062fcf0d67 req-756227ae-60fe-4b0b-ba2b-1783b7ec64a6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.808 238887 DEBUG oslo_concurrency.lockutils [req-5e2c80b0-ac06-4bca-83d8-a7062fcf0d67 req-756227ae-60fe-4b0b-ba2b-1783b7ec64a6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.808 238887 DEBUG oslo_concurrency.lockutils [req-5e2c80b0-ac06-4bca-83d8-a7062fcf0d67 req-756227ae-60fe-4b0b-ba2b-1783b7ec64a6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:49 np0005604943 nova_compute[238883]: 2026-02-02 12:09:49.809 238887 DEBUG nova.compute.manager [req-5e2c80b0-ac06-4bca-83d8-a7062fcf0d67 req-756227ae-60fe-4b0b-ba2b-1783b7ec64a6 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Processing event network-vif-plugged-ed2a76ec-632f-4f24-b7b7-e89921520207 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:09:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Feb  2 07:09:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Feb  2 07:09:49 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Feb  2 07:09:49 np0005604943 podman[269726]: 2026-02-02 12:09:49.948926004 +0000 UTC m=+0.042806023 container create 98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true)
Feb  2 07:09:49 np0005604943 systemd[1]: Started libpod-conmon-98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809.scope.
Feb  2 07:09:50 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:09:50 np0005604943 podman[269726]: 2026-02-02 12:09:49.925576672 +0000 UTC m=+0.019456691 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:09:50 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a636a6702009b3c81c78c44e5b57a2dd8c8c5cdbf562c38b794f95bb0212b52d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:50 np0005604943 podman[269726]: 2026-02-02 12:09:50.035078852 +0000 UTC m=+0.128958851 container init 98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:09:50 np0005604943 podman[269726]: 2026-02-02 12:09:50.041814989 +0000 UTC m=+0.135694988 container start 98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:09:50 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269741]: [NOTICE]   (269745) : New worker (269747) forked
Feb  2 07:09:50 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269741]: [NOTICE]   (269745) : Loading success.
Feb  2 07:09:50 np0005604943 nova_compute[238883]: 2026-02-02 12:09:50.120 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770034175.119599, f425e716-a5bd-4c8e-8135-829321a4281c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:09:50 np0005604943 nova_compute[238883]: 2026-02-02 12:09:50.120 238887 INFO nova.compute.manager [-] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:09:50 np0005604943 nova_compute[238883]: 2026-02-02 12:09:50.138 238887 DEBUG nova.compute.manager [None req-457901d2-c085-473c-b67e-b42f747dde44 - - - - - -] [instance: f425e716-a5bd-4c8e-8135-829321a4281c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:09:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 453 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.0 KiB/s wr, 47 op/s
Feb  2 07:09:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Feb  2 07:09:50 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Feb  2 07:09:50 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Feb  2 07:09:51 np0005604943 nova_compute[238883]: 2026-02-02 12:09:51.810 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:51 np0005604943 nova_compute[238883]: 2026-02-02 12:09:51.881 238887 DEBUG nova.compute.manager [req-85d1aabb-c01d-4ea3-bb7c-ff10a69f9635 req-4e553829-f534-4a04-8e71-672edb29d503 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Received event network-vif-plugged-ed2a76ec-632f-4f24-b7b7-e89921520207 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:09:51 np0005604943 nova_compute[238883]: 2026-02-02 12:09:51.882 238887 DEBUG oslo_concurrency.lockutils [req-85d1aabb-c01d-4ea3-bb7c-ff10a69f9635 req-4e553829-f534-4a04-8e71-672edb29d503 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:09:51 np0005604943 nova_compute[238883]: 2026-02-02 12:09:51.882 238887 DEBUG oslo_concurrency.lockutils [req-85d1aabb-c01d-4ea3-bb7c-ff10a69f9635 req-4e553829-f534-4a04-8e71-672edb29d503 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:09:51 np0005604943 nova_compute[238883]: 2026-02-02 12:09:51.882 238887 DEBUG oslo_concurrency.lockutils [req-85d1aabb-c01d-4ea3-bb7c-ff10a69f9635 req-4e553829-f534-4a04-8e71-672edb29d503 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:09:51 np0005604943 nova_compute[238883]: 2026-02-02 12:09:51.882 238887 DEBUG nova.compute.manager [req-85d1aabb-c01d-4ea3-bb7c-ff10a69f9635 req-4e553829-f534-4a04-8e71-672edb29d503 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] No waiting events found dispatching network-vif-plugged-ed2a76ec-632f-4f24-b7b7-e89921520207 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:09:51 np0005604943 nova_compute[238883]: 2026-02-02 12:09:51.882 238887 WARNING nova.compute.manager [req-85d1aabb-c01d-4ea3-bb7c-ff10a69f9635 req-4e553829-f534-4a04-8e71-672edb29d503 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Received unexpected event network-vif-plugged-ed2a76ec-632f-4f24-b7b7-e89921520207 for instance with vm_state building and task_state spawning.#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.169 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034192.168549, 63f7d822-7481-4c48-a8f8-d900cc1cbb7d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.169 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] VM Started (Lifecycle Event)#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.171 238887 DEBUG nova.compute.manager [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.175 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.179 238887 INFO nova.virt.libvirt.driver [-] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Instance spawned successfully.#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.179 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.196 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.205 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.210 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.211 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.211 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.211 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.212 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.212 238887 DEBUG nova.virt.libvirt.driver [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.241 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.241 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034192.1688583, 63f7d822-7481-4c48-a8f8-d900cc1cbb7d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.241 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] VM Paused (Lifecycle Event)
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.273 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.278 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034192.1741414, 63f7d822-7481-4c48-a8f8-d900cc1cbb7d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.278 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] VM Resumed (Lifecycle Event)
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.283 238887 INFO nova.compute.manager [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Took 6.01 seconds to spawn the instance on the hypervisor.
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.284 238887 DEBUG nova.compute.manager [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.343 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.347 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.378 238887 INFO nova.compute.manager [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Took 8.37 seconds to build instance.
Feb  2 07:09:52 np0005604943 nova_compute[238883]: 2026-02-02 12:09:52.398 238887 DEBUG oslo_concurrency.lockutils [None req-0575b528-5935-4a14-9163-2652ca8f390f cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 07:09:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 453 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 4.9 KiB/s wr, 115 op/s
Feb  2 07:09:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:09:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1913382630' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:09:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:09:53 np0005604943 nova_compute[238883]: 2026-02-02 12:09:53.657 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:09:53 np0005604943 podman[269903]: 2026-02-02 12:09:53.682476977 +0000 UTC m=+0.087933917 container create 4c3bf6a91fc265a350be4c27f83737a5f33acc43a6ade920062b62d883af053a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 07:09:53 np0005604943 systemd[1]: Started libpod-conmon-4c3bf6a91fc265a350be4c27f83737a5f33acc43a6ade920062b62d883af053a.scope.
Feb  2 07:09:53 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:09:53 np0005604943 podman[269903]: 2026-02-02 12:09:53.668379647 +0000 UTC m=+0.073836587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:09:53 np0005604943 podman[269903]: 2026-02-02 12:09:53.768137052 +0000 UTC m=+0.173594012 container init 4c3bf6a91fc265a350be4c27f83737a5f33acc43a6ade920062b62d883af053a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 07:09:53 np0005604943 podman[269903]: 2026-02-02 12:09:53.775441814 +0000 UTC m=+0.180898754 container start 4c3bf6a91fc265a350be4c27f83737a5f33acc43a6ade920062b62d883af053a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb  2 07:09:53 np0005604943 podman[269903]: 2026-02-02 12:09:53.779197022 +0000 UTC m=+0.184653962 container attach 4c3bf6a91fc265a350be4c27f83737a5f33acc43a6ade920062b62d883af053a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 07:09:53 np0005604943 heuristic_meitner[269920]: 167 167
Feb  2 07:09:53 np0005604943 systemd[1]: libpod-4c3bf6a91fc265a350be4c27f83737a5f33acc43a6ade920062b62d883af053a.scope: Deactivated successfully.
Feb  2 07:09:53 np0005604943 podman[269903]: 2026-02-02 12:09:53.784374968 +0000 UTC m=+0.189831908 container died 4c3bf6a91fc265a350be4c27f83737a5f33acc43a6ade920062b62d883af053a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 07:09:53 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b9cc417447cfe791fcc8b65c9965b0bf0996ff500dd6af16b590dbc418ba9360-merged.mount: Deactivated successfully.
Feb  2 07:09:53 np0005604943 podman[269903]: 2026-02-02 12:09:53.823869293 +0000 UTC m=+0.229326233 container remove 4c3bf6a91fc265a350be4c27f83737a5f33acc43a6ade920062b62d883af053a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:09:53 np0005604943 systemd[1]: libpod-conmon-4c3bf6a91fc265a350be4c27f83737a5f33acc43a6ade920062b62d883af053a.scope: Deactivated successfully.
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:09:53 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:09:53 np0005604943 podman[269944]: 2026-02-02 12:09:53.990866729 +0000 UTC m=+0.070014725 container create 00fef604196d6df86165c2b0a92a9668bc2507fa7c2f6c921b5993c031faac93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:09:54 np0005604943 systemd[1]: Started libpod-conmon-00fef604196d6df86165c2b0a92a9668bc2507fa7c2f6c921b5993c031faac93.scope.
Feb  2 07:09:54 np0005604943 podman[269944]: 2026-02-02 12:09:53.968994567 +0000 UTC m=+0.048142613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:09:54 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:09:54 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4785d649dfa5b7a1c99114047c7e92e74e8a10aad9c14cb3685919e8e96e33a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:54 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4785d649dfa5b7a1c99114047c7e92e74e8a10aad9c14cb3685919e8e96e33a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:54 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4785d649dfa5b7a1c99114047c7e92e74e8a10aad9c14cb3685919e8e96e33a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:54 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4785d649dfa5b7a1c99114047c7e92e74e8a10aad9c14cb3685919e8e96e33a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:54 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4785d649dfa5b7a1c99114047c7e92e74e8a10aad9c14cb3685919e8e96e33a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:54 np0005604943 podman[269944]: 2026-02-02 12:09:54.090616095 +0000 UTC m=+0.169764101 container init 00fef604196d6df86165c2b0a92a9668bc2507fa7c2f6c921b5993c031faac93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:09:54 np0005604943 podman[269944]: 2026-02-02 12:09:54.100524745 +0000 UTC m=+0.179672741 container start 00fef604196d6df86165c2b0a92a9668bc2507fa7c2f6c921b5993c031faac93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:09:54 np0005604943 podman[269944]: 2026-02-02 12:09:54.104802707 +0000 UTC m=+0.183950703 container attach 00fef604196d6df86165c2b0a92a9668bc2507fa7c2f6c921b5993c031faac93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 07:09:54 np0005604943 nostalgic_moser[269960]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:09:54 np0005604943 nostalgic_moser[269960]: --> All data devices are unavailable
Feb  2 07:09:54 np0005604943 systemd[1]: libpod-00fef604196d6df86165c2b0a92a9668bc2507fa7c2f6c921b5993c031faac93.scope: Deactivated successfully.
Feb  2 07:09:54 np0005604943 podman[269944]: 2026-02-02 12:09:54.510477201 +0000 UTC m=+0.589625167 container died 00fef604196d6df86165c2b0a92a9668bc2507fa7c2f6c921b5993c031faac93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:09:54 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4785d649dfa5b7a1c99114047c7e92e74e8a10aad9c14cb3685919e8e96e33a8-merged.mount: Deactivated successfully.
Feb  2 07:09:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 453 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 31 KiB/s wr, 141 op/s
Feb  2 07:09:54 np0005604943 podman[269944]: 2026-02-02 12:09:54.549585545 +0000 UTC m=+0.628733501 container remove 00fef604196d6df86165c2b0a92a9668bc2507fa7c2f6c921b5993c031faac93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:09:54 np0005604943 systemd[1]: libpod-conmon-00fef604196d6df86165c2b0a92a9668bc2507fa7c2f6c921b5993c031faac93.scope: Deactivated successfully.
Feb  2 07:09:54 np0005604943 podman[270053]: 2026-02-02 12:09:54.943850239 +0000 UTC m=+0.032329858 container create a088a773398f58177a0b057d0b8a4bd6de4e5026ecf12e840fcd61c5ca138c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:09:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Feb  2 07:09:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Feb  2 07:09:54 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Feb  2 07:09:54 np0005604943 systemd[1]: Started libpod-conmon-a088a773398f58177a0b057d0b8a4bd6de4e5026ecf12e840fcd61c5ca138c24.scope.
Feb  2 07:09:55 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:09:55 np0005604943 podman[270053]: 2026-02-02 12:09:55.014037539 +0000 UTC m=+0.102517178 container init a088a773398f58177a0b057d0b8a4bd6de4e5026ecf12e840fcd61c5ca138c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jackson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:09:55 np0005604943 podman[270053]: 2026-02-02 12:09:55.018387763 +0000 UTC m=+0.106867382 container start a088a773398f58177a0b057d0b8a4bd6de4e5026ecf12e840fcd61c5ca138c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jackson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb  2 07:09:55 np0005604943 podman[270053]: 2026-02-02 12:09:55.021711451 +0000 UTC m=+0.110191070 container attach a088a773398f58177a0b057d0b8a4bd6de4e5026ecf12e840fcd61c5ca138c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:09:55 np0005604943 vigorous_jackson[270069]: 167 167
Feb  2 07:09:55 np0005604943 systemd[1]: libpod-a088a773398f58177a0b057d0b8a4bd6de4e5026ecf12e840fcd61c5ca138c24.scope: Deactivated successfully.
Feb  2 07:09:55 np0005604943 conmon[270069]: conmon a088a773398f58177a0b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a088a773398f58177a0b057d0b8a4bd6de4e5026ecf12e840fcd61c5ca138c24.scope/container/memory.events
Feb  2 07:09:55 np0005604943 podman[270053]: 2026-02-02 12:09:54.93014795 +0000 UTC m=+0.018627589 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:09:55 np0005604943 podman[270053]: 2026-02-02 12:09:55.028041716 +0000 UTC m=+0.116521345 container died a088a773398f58177a0b057d0b8a4bd6de4e5026ecf12e840fcd61c5ca138c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jackson, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb  2 07:09:55 np0005604943 systemd[1]: var-lib-containers-storage-overlay-c6498b047efdc0acf9cb3a9e20f1b38f45b2e0d9cdc21d8aad21a280a71dee47-merged.mount: Deactivated successfully.
Feb  2 07:09:55 np0005604943 podman[270053]: 2026-02-02 12:09:55.066727041 +0000 UTC m=+0.155206670 container remove a088a773398f58177a0b057d0b8a4bd6de4e5026ecf12e840fcd61c5ca138c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:09:55 np0005604943 systemd[1]: libpod-conmon-a088a773398f58177a0b057d0b8a4bd6de4e5026ecf12e840fcd61c5ca138c24.scope: Deactivated successfully.
Feb  2 07:09:55 np0005604943 podman[270093]: 2026-02-02 12:09:55.224700421 +0000 UTC m=+0.058758121 container create 9e8fa57471e2c04650dbe5e98806ad3a1c67647c38ff2b32dd518f6ba9cca6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ramanujan, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 07:09:55 np0005604943 systemd[1]: Started libpod-conmon-9e8fa57471e2c04650dbe5e98806ad3a1c67647c38ff2b32dd518f6ba9cca6e6.scope.
Feb  2 07:09:55 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:09:55 np0005604943 podman[270093]: 2026-02-02 12:09:55.208189418 +0000 UTC m=+0.042247138 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:09:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fe87f62569e8c4cf94659cc1c9c4f130784af625d1493c1f0ddf960bb619f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fe87f62569e8c4cf94659cc1c9c4f130784af625d1493c1f0ddf960bb619f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fe87f62569e8c4cf94659cc1c9c4f130784af625d1493c1f0ddf960bb619f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:55 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fe87f62569e8c4cf94659cc1c9c4f130784af625d1493c1f0ddf960bb619f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:55 np0005604943 podman[270093]: 2026-02-02 12:09:55.323006248 +0000 UTC m=+0.157063968 container init 9e8fa57471e2c04650dbe5e98806ad3a1c67647c38ff2b32dd518f6ba9cca6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb  2 07:09:55 np0005604943 podman[270093]: 2026-02-02 12:09:55.331695706 +0000 UTC m=+0.165753406 container start 9e8fa57471e2c04650dbe5e98806ad3a1c67647c38ff2b32dd518f6ba9cca6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:09:55 np0005604943 podman[270093]: 2026-02-02 12:09:55.33526841 +0000 UTC m=+0.169326110 container attach 9e8fa57471e2c04650dbe5e98806ad3a1c67647c38ff2b32dd518f6ba9cca6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ramanujan, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]: {
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:    "0": [
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:        {
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "devices": [
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "/dev/loop3"
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            ],
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_name": "ceph_lv0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_size": "21470642176",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "name": "ceph_lv0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "tags": {
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.cluster_name": "ceph",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.crush_device_class": "",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.encrypted": "0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.objectstore": "bluestore",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.osd_id": "0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.type": "block",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.vdo": "0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.with_tpm": "0"
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            },
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "type": "block",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "vg_name": "ceph_vg0"
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:        }
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:    ],
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:    "1": [
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:        {
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "devices": [
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "/dev/loop4"
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            ],
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_name": "ceph_lv1",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_size": "21470642176",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "name": "ceph_lv1",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "tags": {
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.cluster_name": "ceph",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.crush_device_class": "",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.encrypted": "0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.objectstore": "bluestore",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.osd_id": "1",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.type": "block",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.vdo": "0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.with_tpm": "0"
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            },
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "type": "block",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "vg_name": "ceph_vg1"
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:        }
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:    ],
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:    "2": [
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:        {
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "devices": [
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "/dev/loop5"
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            ],
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_name": "ceph_lv2",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_size": "21470642176",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "name": "ceph_lv2",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "tags": {
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.cluster_name": "ceph",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.crush_device_class": "",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.encrypted": "0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.objectstore": "bluestore",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.osd_id": "2",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.type": "block",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.vdo": "0",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:                "ceph.with_tpm": "0"
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            },
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "type": "block",
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:            "vg_name": "ceph_vg2"
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:        }
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]:    ]
Feb  2 07:09:55 np0005604943 optimistic_ramanujan[270111]: }
Feb  2 07:09:55 np0005604943 systemd[1]: libpod-9e8fa57471e2c04650dbe5e98806ad3a1c67647c38ff2b32dd518f6ba9cca6e6.scope: Deactivated successfully.
Feb  2 07:09:55 np0005604943 podman[270093]: 2026-02-02 12:09:55.666308156 +0000 UTC m=+0.500365856 container died 9e8fa57471e2c04650dbe5e98806ad3a1c67647c38ff2b32dd518f6ba9cca6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 07:09:55 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a2fe87f62569e8c4cf94659cc1c9c4f130784af625d1493c1f0ddf960bb619f6-merged.mount: Deactivated successfully.
Feb  2 07:09:55 np0005604943 podman[270093]: 2026-02-02 12:09:55.703544343 +0000 UTC m=+0.537602043 container remove 9e8fa57471e2c04650dbe5e98806ad3a1c67647c38ff2b32dd518f6ba9cca6e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Feb  2 07:09:55 np0005604943 systemd[1]: libpod-conmon-9e8fa57471e2c04650dbe5e98806ad3a1c67647c38ff2b32dd518f6ba9cca6e6.scope: Deactivated successfully.
Feb  2 07:09:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Feb  2 07:09:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Feb  2 07:09:55 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Feb  2 07:09:56 np0005604943 podman[270194]: 2026-02-02 12:09:56.16432032 +0000 UTC m=+0.046162071 container create f3e1493525c6f6c7dda132b0e0a8a3303d088b8aa21509c6d21dd9616e98d726 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_einstein, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 07:09:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:56.176 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:09:56 np0005604943 nova_compute[238883]: 2026-02-02 12:09:56.219 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:56 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:09:56.179 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 07:09:56 np0005604943 podman[270194]: 2026-02-02 12:09:56.142753615 +0000 UTC m=+0.024595396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:09:56 np0005604943 systemd[1]: Started libpod-conmon-f3e1493525c6f6c7dda132b0e0a8a3303d088b8aa21509c6d21dd9616e98d726.scope.
Feb  2 07:09:56 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:09:56 np0005604943 podman[270194]: 2026-02-02 12:09:56.294289645 +0000 UTC m=+0.176131396 container init f3e1493525c6f6c7dda132b0e0a8a3303d088b8aa21509c6d21dd9616e98d726 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_einstein, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:09:56 np0005604943 podman[270194]: 2026-02-02 12:09:56.30019414 +0000 UTC m=+0.182035891 container start f3e1493525c6f6c7dda132b0e0a8a3303d088b8aa21509c6d21dd9616e98d726 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:09:56 np0005604943 tender_einstein[270211]: 167 167
Feb  2 07:09:56 np0005604943 systemd[1]: libpod-f3e1493525c6f6c7dda132b0e0a8a3303d088b8aa21509c6d21dd9616e98d726.scope: Deactivated successfully.
Feb  2 07:09:56 np0005604943 podman[270194]: 2026-02-02 12:09:56.306050853 +0000 UTC m=+0.187892624 container attach f3e1493525c6f6c7dda132b0e0a8a3303d088b8aa21509c6d21dd9616e98d726 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 07:09:56 np0005604943 conmon[270211]: conmon f3e1493525c6f6c7dda1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f3e1493525c6f6c7dda132b0e0a8a3303d088b8aa21509c6d21dd9616e98d726.scope/container/memory.events
Feb  2 07:09:56 np0005604943 podman[270194]: 2026-02-02 12:09:56.308907438 +0000 UTC m=+0.190749189 container died f3e1493525c6f6c7dda132b0e0a8a3303d088b8aa21509c6d21dd9616e98d726 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:09:56 np0005604943 nova_compute[238883]: 2026-02-02 12:09:56.317 238887 DEBUG nova.compute.manager [req-b554cda7-272e-4504-921e-b0640dbe2d0f req-0db778f2-7091-4696-b1e1-3d00fcc3c418 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Received event network-changed-ed2a76ec-632f-4f24-b7b7-e89921520207 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:09:56 np0005604943 nova_compute[238883]: 2026-02-02 12:09:56.319 238887 DEBUG nova.compute.manager [req-b554cda7-272e-4504-921e-b0640dbe2d0f req-0db778f2-7091-4696-b1e1-3d00fcc3c418 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Refreshing instance network info cache due to event network-changed-ed2a76ec-632f-4f24-b7b7-e89921520207. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:09:56 np0005604943 nova_compute[238883]: 2026-02-02 12:09:56.320 238887 DEBUG oslo_concurrency.lockutils [req-b554cda7-272e-4504-921e-b0640dbe2d0f req-0db778f2-7091-4696-b1e1-3d00fcc3c418 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-63f7d822-7481-4c48-a8f8-d900cc1cbb7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:09:56 np0005604943 nova_compute[238883]: 2026-02-02 12:09:56.320 238887 DEBUG oslo_concurrency.lockutils [req-b554cda7-272e-4504-921e-b0640dbe2d0f req-0db778f2-7091-4696-b1e1-3d00fcc3c418 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-63f7d822-7481-4c48-a8f8-d900cc1cbb7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:09:56 np0005604943 nova_compute[238883]: 2026-02-02 12:09:56.320 238887 DEBUG nova.network.neutron [req-b554cda7-272e-4504-921e-b0640dbe2d0f req-0db778f2-7091-4696-b1e1-3d00fcc3c418 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Refreshing network info cache for port ed2a76ec-632f-4f24-b7b7-e89921520207 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:09:56 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ee0d30dfa25ae331e3bb6cb95fbd294ccf400e88ae283884c0413fab92001402-merged.mount: Deactivated successfully.
Feb  2 07:09:56 np0005604943 podman[270194]: 2026-02-02 12:09:56.344728788 +0000 UTC m=+0.226570549 container remove f3e1493525c6f6c7dda132b0e0a8a3303d088b8aa21509c6d21dd9616e98d726 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:09:56 np0005604943 systemd[1]: libpod-conmon-f3e1493525c6f6c7dda132b0e0a8a3303d088b8aa21509c6d21dd9616e98d726.scope: Deactivated successfully.
Feb  2 07:09:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:09:56 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4047455125' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:09:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:09:56 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4047455125' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:09:56 np0005604943 podman[270236]: 2026-02-02 12:09:56.488473956 +0000 UTC m=+0.039195478 container create ab3c97a49f144b98b002fec3f35ff1c685fb18a6cf019739931b4c25c0b969c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_poitras, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:09:56 np0005604943 systemd[1]: Started libpod-conmon-ab3c97a49f144b98b002fec3f35ff1c685fb18a6cf019739931b4c25c0b969c4.scope.
Feb  2 07:09:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 453 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 31 KiB/s wr, 142 op/s
Feb  2 07:09:56 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:09:56 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b32fa80e09f9b31a239baf952d22878582e7a6638c7b3cef1c175881d1d5f64f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:56 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b32fa80e09f9b31a239baf952d22878582e7a6638c7b3cef1c175881d1d5f64f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:56 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b32fa80e09f9b31a239baf952d22878582e7a6638c7b3cef1c175881d1d5f64f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:56 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b32fa80e09f9b31a239baf952d22878582e7a6638c7b3cef1c175881d1d5f64f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:09:56 np0005604943 podman[270236]: 2026-02-02 12:09:56.470969268 +0000 UTC m=+0.021690810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:09:56 np0005604943 podman[270236]: 2026-02-02 12:09:56.573840664 +0000 UTC m=+0.124562206 container init ab3c97a49f144b98b002fec3f35ff1c685fb18a6cf019739931b4c25c0b969c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb  2 07:09:56 np0005604943 podman[270236]: 2026-02-02 12:09:56.582606824 +0000 UTC m=+0.133328356 container start ab3c97a49f144b98b002fec3f35ff1c685fb18a6cf019739931b4c25c0b969c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_poitras, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:09:56 np0005604943 podman[270236]: 2026-02-02 12:09:56.586541897 +0000 UTC m=+0.137263439 container attach ab3c97a49f144b98b002fec3f35ff1c685fb18a6cf019739931b4c25c0b969c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 07:09:56 np0005604943 nova_compute[238883]: 2026-02-02 12:09:56.813 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:57 np0005604943 lvm[270332]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:09:57 np0005604943 lvm[270332]: VG ceph_vg1 finished
Feb  2 07:09:57 np0005604943 lvm[270331]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:09:57 np0005604943 lvm[270331]: VG ceph_vg0 finished
Feb  2 07:09:57 np0005604943 lvm[270334]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:09:57 np0005604943 lvm[270334]: VG ceph_vg2 finished
Feb  2 07:09:57 np0005604943 nova_compute[238883]: 2026-02-02 12:09:57.287 238887 DEBUG nova.network.neutron [req-b554cda7-272e-4504-921e-b0640dbe2d0f req-0db778f2-7091-4696-b1e1-3d00fcc3c418 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Updated VIF entry in instance network info cache for port ed2a76ec-632f-4f24-b7b7-e89921520207. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:09:57 np0005604943 nova_compute[238883]: 2026-02-02 12:09:57.288 238887 DEBUG nova.network.neutron [req-b554cda7-272e-4504-921e-b0640dbe2d0f req-0db778f2-7091-4696-b1e1-3d00fcc3c418 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Updating instance_info_cache with network_info: [{"id": "ed2a76ec-632f-4f24-b7b7-e89921520207", "address": "fa:16:3e:c7:72:dd", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped2a76ec-63", "ovs_interfaceid": "ed2a76ec-632f-4f24-b7b7-e89921520207", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:09:57 np0005604943 nova_compute[238883]: 2026-02-02 12:09:57.311 238887 DEBUG oslo_concurrency.lockutils [req-b554cda7-272e-4504-921e-b0640dbe2d0f req-0db778f2-7091-4696-b1e1-3d00fcc3c418 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-63f7d822-7481-4c48-a8f8-d900cc1cbb7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:09:57 np0005604943 clever_poitras[270253]: {}
Feb  2 07:09:57 np0005604943 systemd[1]: libpod-ab3c97a49f144b98b002fec3f35ff1c685fb18a6cf019739931b4c25c0b969c4.scope: Deactivated successfully.
Feb  2 07:09:57 np0005604943 podman[270236]: 2026-02-02 12:09:57.399317882 +0000 UTC m=+0.950039414 container died ab3c97a49f144b98b002fec3f35ff1c685fb18a6cf019739931b4c25c0b969c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb  2 07:09:57 np0005604943 systemd[1]: libpod-ab3c97a49f144b98b002fec3f35ff1c685fb18a6cf019739931b4c25c0b969c4.scope: Consumed 1.174s CPU time.
Feb  2 07:09:57 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b32fa80e09f9b31a239baf952d22878582e7a6638c7b3cef1c175881d1d5f64f-merged.mount: Deactivated successfully.
Feb  2 07:09:57 np0005604943 podman[270236]: 2026-02-02 12:09:57.440972024 +0000 UTC m=+0.991693546 container remove ab3c97a49f144b98b002fec3f35ff1c685fb18a6cf019739931b4c25c0b969c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 07:09:57 np0005604943 systemd[1]: libpod-conmon-ab3c97a49f144b98b002fec3f35ff1c685fb18a6cf019739931b4c25c0b969c4.scope: Deactivated successfully.
Feb  2 07:09:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:09:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:09:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:09:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:09:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:09:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e439 do_prune osdmap full prune enabled
Feb  2 07:09:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e440 e440: 3 total, 3 up, 3 in
Feb  2 07:09:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e440: 3 total, 3 up, 3 in
Feb  2 07:09:57 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:09:57 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:09:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 453 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.8 KiB/s wr, 235 op/s
Feb  2 07:09:58 np0005604943 nova_compute[238883]: 2026-02-02 12:09:58.697 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:09:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:09:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3759907980' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:09:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e440 do_prune osdmap full prune enabled
Feb  2 07:10:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e441 e441: 3 total, 3 up, 3 in
Feb  2 07:10:00 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e441: 3 total, 3 up, 3 in
Feb  2 07:10:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 453 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.4 KiB/s wr, 199 op/s
Feb  2 07:10:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e441 do_prune osdmap full prune enabled
Feb  2 07:10:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e442 e442: 3 total, 3 up, 3 in
Feb  2 07:10:01 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e442: 3 total, 3 up, 3 in
Feb  2 07:10:01 np0005604943 nova_compute[238883]: 2026-02-02 12:10:01.814 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e442 do_prune osdmap full prune enabled
Feb  2 07:10:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e443 e443: 3 total, 3 up, 3 in
Feb  2 07:10:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e443: 3 total, 3 up, 3 in
Feb  2 07:10:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:10:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1180514649' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:10:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:10:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1180514649' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:10:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 453 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 4.5 KiB/s wr, 77 op/s
Feb  2 07:10:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:10:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e443 do_prune osdmap full prune enabled
Feb  2 07:10:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e444 e444: 3 total, 3 up, 3 in
Feb  2 07:10:02 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e444: 3 total, 3 up, 3 in
Feb  2 07:10:03 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:03.181 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:10:03 np0005604943 ovn_controller[145056]: 2026-02-02T12:10:03Z|00066|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.14
Feb  2 07:10:03 np0005604943 ovn_controller[145056]: 2026-02-02T12:10:03Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c7:72:dd 10.100.0.14
Feb  2 07:10:03 np0005604943 nova_compute[238883]: 2026-02-02 12:10:03.700 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 453 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 596 KiB/s rd, 4.6 KiB/s wr, 129 op/s
Feb  2 07:10:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:10:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3938559580' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:10:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e444 do_prune osdmap full prune enabled
Feb  2 07:10:06 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e445 e445: 3 total, 3 up, 3 in
Feb  2 07:10:06 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e445: 3 total, 3 up, 3 in
Feb  2 07:10:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 453 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 491 KiB/s rd, 3.8 KiB/s wr, 106 op/s
Feb  2 07:10:06 np0005604943 nova_compute[238883]: 2026-02-02 12:10:06.817 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e445 do_prune osdmap full prune enabled
Feb  2 07:10:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e446 e446: 3 total, 3 up, 3 in
Feb  2 07:10:07 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e446: 3 total, 3 up, 3 in
Feb  2 07:10:07 np0005604943 ovn_controller[145056]: 2026-02-02T12:10:07Z|00068|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.14
Feb  2 07:10:07 np0005604943 ovn_controller[145056]: 2026-02-02T12:10:07Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c7:72:dd 10.100.0.14
Feb  2 07:10:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:10:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e446 do_prune osdmap full prune enabled
Feb  2 07:10:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e447 e447: 3 total, 3 up, 3 in
Feb  2 07:10:07 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e447: 3 total, 3 up, 3 in
Feb  2 07:10:08 np0005604943 ovn_controller[145056]: 2026-02-02T12:10:08Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:72:dd 10.100.0.14
Feb  2 07:10:08 np0005604943 ovn_controller[145056]: 2026-02-02T12:10:08Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:72:dd 10.100.0.14
Feb  2 07:10:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 18 KiB/s wr, 104 op/s
Feb  2 07:10:08 np0005604943 nova_compute[238883]: 2026-02-02 12:10:08.703 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e447 do_prune osdmap full prune enabled
Feb  2 07:10:08 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e448 e448: 3 total, 3 up, 3 in
Feb  2 07:10:08 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e448: 3 total, 3 up, 3 in
Feb  2 07:10:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:10:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4244939520' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:10:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:10:09 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4244939520' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:10:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:10:09
Feb  2 07:10:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:10:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:10:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['backups', 'volumes', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', '.rgw.root']
Feb  2 07:10:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:10:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:10.034 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:10:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:10.035 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:10:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:10.036 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 24 KiB/s wr, 126 op/s
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:10:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:10:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3056513678' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:10:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:10:11 np0005604943 nova_compute[238883]: 2026-02-02 12:10:11.820 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e448 do_prune osdmap full prune enabled
Feb  2 07:10:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e449 e449: 3 total, 3 up, 3 in
Feb  2 07:10:11 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e449: 3 total, 3 up, 3 in
Feb  2 07:10:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 934 KiB/s rd, 50 KiB/s wr, 132 op/s
Feb  2 07:10:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e449 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:10:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e449 do_prune osdmap full prune enabled
Feb  2 07:10:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e450 e450: 3 total, 3 up, 3 in
Feb  2 07:10:12 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e450: 3 total, 3 up, 3 in
Feb  2 07:10:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:10:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2739719346' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:10:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:10:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2739719346' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:10:13 np0005604943 nova_compute[238883]: 2026-02-02 12:10:13.707 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 140 KiB/s rd, 32 KiB/s wr, 144 op/s
Feb  2 07:10:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e450 do_prune osdmap full prune enabled
Feb  2 07:10:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e451 e451: 3 total, 3 up, 3 in
Feb  2 07:10:14 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e451: 3 total, 3 up, 3 in
Feb  2 07:10:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 30 KiB/s wr, 106 op/s
Feb  2 07:10:16 np0005604943 nova_compute[238883]: 2026-02-02 12:10:16.822 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e451 do_prune osdmap full prune enabled
Feb  2 07:10:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e452 e452: 3 total, 3 up, 3 in
Feb  2 07:10:16 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e452: 3 total, 3 up, 3 in
Feb  2 07:10:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e452 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:10:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e452 do_prune osdmap full prune enabled
Feb  2 07:10:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e453 e453: 3 total, 3 up, 3 in
Feb  2 07:10:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e453: 3 total, 3 up, 3 in
Feb  2 07:10:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 11 KiB/s wr, 133 op/s
Feb  2 07:10:18 np0005604943 nova_compute[238883]: 2026-02-02 12:10:18.710 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e453 do_prune osdmap full prune enabled
Feb  2 07:10:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e454 e454: 3 total, 3 up, 3 in
Feb  2 07:10:18 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e454: 3 total, 3 up, 3 in
Feb  2 07:10:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e454 do_prune osdmap full prune enabled
Feb  2 07:10:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e455 e455: 3 total, 3 up, 3 in
Feb  2 07:10:20 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e455: 3 total, 3 up, 3 in
Feb  2 07:10:20 np0005604943 podman[270376]: 2026-02-02 12:10:20.049488617 +0000 UTC m=+0.068371494 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb  2 07:10:20 np0005604943 podman[270375]: 2026-02-02 12:10:20.081375153 +0000 UTC m=+0.100675870 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Feb  2 07:10:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:10:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/298028747' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:10:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:10:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/298028747' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:10:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 15 KiB/s wr, 129 op/s
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.735411847484619e-06 of space, bias 1.0, pg target 0.0014206235542453857 quantized to 32 (current 32)
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005450142557797833 of space, bias 1.0, pg target 1.63504276733935 quantized to 32 (current 32)
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.6253822179109843e-06 of space, bias 1.0, pg target 0.00048598928315538427 quantized to 32 (current 32)
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006668451219407067 of space, bias 1.0, pg target 0.1993866914602713 quantized to 32 (current 32)
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0713078108291542e-06 of space, bias 4.0, pg target 0.0012812841417516683 quantized to 16 (current 16)
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:10:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb  2 07:10:21 np0005604943 nova_compute[238883]: 2026-02-02 12:10:21.824 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 32 KiB/s wr, 135 op/s
Feb  2 07:10:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e455 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:10:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e455 do_prune osdmap full prune enabled
Feb  2 07:10:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e456 e456: 3 total, 3 up, 3 in
Feb  2 07:10:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e456: 3 total, 3 up, 3 in
Feb  2 07:10:23 np0005604943 nova_compute[238883]: 2026-02-02 12:10:23.714 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 23 KiB/s wr, 116 op/s
Feb  2 07:10:25 np0005604943 ovn_controller[145056]: 2026-02-02T12:10:25Z|00277|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Feb  2 07:10:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 17 KiB/s wr, 61 op/s
Feb  2 07:10:26 np0005604943 nova_compute[238883]: 2026-02-02 12:10:26.826 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:10:27 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2825048758' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:10:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e456 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:10:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e456 do_prune osdmap full prune enabled
Feb  2 07:10:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e457 e457: 3 total, 3 up, 3 in
Feb  2 07:10:27 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e457: 3 total, 3 up, 3 in
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.532 238887 DEBUG oslo_concurrency.lockutils [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.533 238887 DEBUG oslo_concurrency.lockutils [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.533 238887 DEBUG oslo_concurrency.lockutils [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.533 238887 DEBUG oslo_concurrency.lockutils [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.533 238887 DEBUG oslo_concurrency.lockutils [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.535 238887 INFO nova.compute.manager [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Terminating instance#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.536 238887 DEBUG nova.compute.manager [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:10:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 18 KiB/s wr, 59 op/s
Feb  2 07:10:28 np0005604943 kernel: taped2a76ec-63 (unregistering): left promiscuous mode
Feb  2 07:10:28 np0005604943 NetworkManager[49093]: <info>  [1770034228.5822] device (taped2a76ec-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.588 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:28 np0005604943 ovn_controller[145056]: 2026-02-02T12:10:28Z|00278|binding|INFO|Releasing lport ed2a76ec-632f-4f24-b7b7-e89921520207 from this chassis (sb_readonly=0)
Feb  2 07:10:28 np0005604943 ovn_controller[145056]: 2026-02-02T12:10:28Z|00279|binding|INFO|Setting lport ed2a76ec-632f-4f24-b7b7-e89921520207 down in Southbound
Feb  2 07:10:28 np0005604943 ovn_controller[145056]: 2026-02-02T12:10:28Z|00280|binding|INFO|Removing iface taped2a76ec-63 ovn-installed in OVS
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.591 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.600 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:72:dd 10.100.0.14'], port_security=['fa:16:3e:c7:72:dd 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '63f7d822-7481-4c48-a8f8-d900cc1cbb7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efa24ae1-9962-44ca-882a-8d146356fcca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c7b49c49c104c079544033b07fb2f3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2781f824-7ac1-4375-9bc7-15197abfb3e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8b6e8bcf-741b-41c8-a826-9b6dbb1c260b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=ed2a76ec-632f-4f24-b7b7-e89921520207) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.601 155011 INFO neutron.agent.ovn.metadata.agent [-] Port ed2a76ec-632f-4f24-b7b7-e89921520207 in datapath efa24ae1-9962-44ca-882a-8d146356fcca unbound from our chassis#033[00m
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.602 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network efa24ae1-9962-44ca-882a-8d146356fcca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.605 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[e22bb99b-f413-4015-a2f2-ab9b42e78506]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.605 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca namespace which is not needed anymore#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.606 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:28 np0005604943 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Feb  2 07:10:28 np0005604943 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Consumed 15.357s CPU time.
Feb  2 07:10:28 np0005604943 systemd-machined[206973]: Machine qemu-28-instance-0000001c terminated.
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.716 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:28 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269741]: [NOTICE]   (269745) : haproxy version is 2.8.14-c23fe91
Feb  2 07:10:28 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269741]: [NOTICE]   (269745) : path to executable is /usr/sbin/haproxy
Feb  2 07:10:28 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269741]: [WARNING]  (269745) : Exiting Master process...
Feb  2 07:10:28 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269741]: [ALERT]    (269745) : Current worker (269747) exited with code 143 (Terminated)
Feb  2 07:10:28 np0005604943 neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca[269741]: [WARNING]  (269745) : All workers exited. Exiting... (0)
Feb  2 07:10:28 np0005604943 systemd[1]: libpod-98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809.scope: Deactivated successfully.
Feb  2 07:10:28 np0005604943 podman[270443]: 2026-02-02 12:10:28.736564458 +0000 UTC m=+0.058036572 container died 98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.755 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.759 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:28 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809-userdata-shm.mount: Deactivated successfully.
Feb  2 07:10:28 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a636a6702009b3c81c78c44e5b57a2dd8c8c5cdbf562c38b794f95bb0212b52d-merged.mount: Deactivated successfully.
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.776 238887 INFO nova.virt.libvirt.driver [-] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Instance destroyed successfully.#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.777 238887 DEBUG nova.objects.instance [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lazy-loading 'resources' on Instance uuid 63f7d822-7481-4c48-a8f8-d900cc1cbb7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:10:28 np0005604943 podman[270443]: 2026-02-02 12:10:28.783688693 +0000 UTC m=+0.105160807 container cleanup 98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.791 238887 DEBUG nova.virt.libvirt.vif [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:09:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1899434577',display_name='tempest-TransferEncryptedVolumeTest-server-1899434577',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1899434577',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7aUv7PBeO78zp3TdCQ5pHHrUWfgatS9ASOECbWv5UGrW7YbMyQ2Q5xaozZcd0G8LLxfP6XSKv3an4flOYSD0UKdfJBDp1c8Bpfee8qRIo6Ih80jJn9izsYTHHCaZomBw==',key_name='tempest-TransferEncryptedVolumeTest-1781235943',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:09:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c7b49c49c104c079544033b07fb2f3d',ramdisk_id='',reservation_id='r-47c40zi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-347797880',owner_user_name='tempest-TransferEncryptedVolumeTest-347797880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:09:52Z,user_data=None,user_id='cd5824e18d5e443cb24d3bf55ff2c553',uuid=63f7d822-7481-4c48-a8f8-d900cc1cbb7d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed2a76ec-632f-4f24-b7b7-e89921520207", "address": "fa:16:3e:c7:72:dd", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped2a76ec-63", "ovs_interfaceid": "ed2a76ec-632f-4f24-b7b7-e89921520207", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.791 238887 DEBUG nova.network.os_vif_util [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converting VIF {"id": "ed2a76ec-632f-4f24-b7b7-e89921520207", "address": "fa:16:3e:c7:72:dd", "network": {"id": "efa24ae1-9962-44ca-882a-8d146356fcca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-113290311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c7b49c49c104c079544033b07fb2f3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped2a76ec-63", "ovs_interfaceid": "ed2a76ec-632f-4f24-b7b7-e89921520207", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.792 238887 DEBUG nova.network.os_vif_util [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:72:dd,bridge_name='br-int',has_traffic_filtering=True,id=ed2a76ec-632f-4f24-b7b7-e89921520207,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped2a76ec-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.793 238887 DEBUG os_vif [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:72:dd,bridge_name='br-int',has_traffic_filtering=True,id=ed2a76ec-632f-4f24-b7b7-e89921520207,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped2a76ec-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:10:28 np0005604943 systemd[1]: libpod-conmon-98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809.scope: Deactivated successfully.
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.794 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.794 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped2a76ec-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.797 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.800 238887 INFO os_vif [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:72:dd,bridge_name='br-int',has_traffic_filtering=True,id=ed2a76ec-632f-4f24-b7b7-e89921520207,network=Network(efa24ae1-9962-44ca-882a-8d146356fcca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped2a76ec-63')#033[00m
Feb  2 07:10:28 np0005604943 podman[270480]: 2026-02-02 12:10:28.862784686 +0000 UTC m=+0.051100100 container remove 98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.869 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[a0046da0-7c1c-487a-ae7f-9ae85af429b4]: (4, ('Mon Feb  2 12:10:28 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca (98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809)\n98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809\nMon Feb  2 12:10:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca (98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809)\n98eaaf6bdeb7a14b4bc96b00e3a062ea09a704a29c1d377b9ab71c74f55ea809\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.872 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6a10cfd9-c6d6-4467-af37-38fb49152743]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:10:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e457 do_prune osdmap full prune enabled
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.873 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefa24ae1-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:10:28 np0005604943 kernel: tapefa24ae1-90: left promiscuous mode
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.877 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.880 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8cb6cd-d5a8-488f-8458-3de9fc6761c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.885 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:28 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e458 e458: 3 total, 3 up, 3 in
Feb  2 07:10:28 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e458: 3 total, 3 up, 3 in
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.898 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4e0a99-459a-4a8b-9ca6-cad100c0caf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.900 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[709238fa-ab95-42c8-a0f7-30a063fc90d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.916 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc39e81-542f-4b93-bf58-1e680c82f7f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463475, 'reachable_time': 26697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270511, 'error': None, 'target': 'ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:10:28 np0005604943 systemd[1]: run-netns-ovnmeta\x2defa24ae1\x2d9962\x2d44ca\x2d882a\x2d8d146356fcca.mount: Deactivated successfully.
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.920 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-efa24ae1-9962-44ca-882a-8d146356fcca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:10:28 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:28.920 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[94a78fce-787d-41ce-966a-aaa61f924069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.975 238887 INFO nova.virt.libvirt.driver [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Deleting instance files /var/lib/nova/instances/63f7d822-7481-4c48-a8f8-d900cc1cbb7d_del#033[00m
Feb  2 07:10:28 np0005604943 nova_compute[238883]: 2026-02-02 12:10:28.976 238887 INFO nova.virt.libvirt.driver [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Deletion of /var/lib/nova/instances/63f7d822-7481-4c48-a8f8-d900cc1cbb7d_del complete#033[00m
Feb  2 07:10:29 np0005604943 nova_compute[238883]: 2026-02-02 12:10:29.048 238887 INFO nova.compute.manager [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Took 0.51 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:10:29 np0005604943 nova_compute[238883]: 2026-02-02 12:10:29.049 238887 DEBUG oslo.service.loopingcall [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:10:29 np0005604943 nova_compute[238883]: 2026-02-02 12:10:29.050 238887 DEBUG nova.compute.manager [-] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:10:29 np0005604943 nova_compute[238883]: 2026-02-02 12:10:29.050 238887 DEBUG nova.network.neutron [-] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:10:29 np0005604943 nova_compute[238883]: 2026-02-02 12:10:29.566 238887 DEBUG nova.compute.manager [req-fdfb0659-512c-4df7-b216-b9b0620dc035 req-bbb4d321-e1e9-4409-82fa-17da8955dc05 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Received event network-vif-unplugged-ed2a76ec-632f-4f24-b7b7-e89921520207 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:10:29 np0005604943 nova_compute[238883]: 2026-02-02 12:10:29.566 238887 DEBUG oslo_concurrency.lockutils [req-fdfb0659-512c-4df7-b216-b9b0620dc035 req-bbb4d321-e1e9-4409-82fa-17da8955dc05 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:10:29 np0005604943 nova_compute[238883]: 2026-02-02 12:10:29.566 238887 DEBUG oslo_concurrency.lockutils [req-fdfb0659-512c-4df7-b216-b9b0620dc035 req-bbb4d321-e1e9-4409-82fa-17da8955dc05 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:10:29 np0005604943 nova_compute[238883]: 2026-02-02 12:10:29.567 238887 DEBUG oslo_concurrency.lockutils [req-fdfb0659-512c-4df7-b216-b9b0620dc035 req-bbb4d321-e1e9-4409-82fa-17da8955dc05 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:10:29 np0005604943 nova_compute[238883]: 2026-02-02 12:10:29.567 238887 DEBUG nova.compute.manager [req-fdfb0659-512c-4df7-b216-b9b0620dc035 req-bbb4d321-e1e9-4409-82fa-17da8955dc05 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] No waiting events found dispatching network-vif-unplugged-ed2a76ec-632f-4f24-b7b7-e89921520207 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:10:29 np0005604943 nova_compute[238883]: 2026-02-02 12:10:29.567 238887 DEBUG nova.compute.manager [req-fdfb0659-512c-4df7-b216-b9b0620dc035 req-bbb4d321-e1e9-4409-82fa-17da8955dc05 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Received event network-vif-unplugged-ed2a76ec-632f-4f24-b7b7-e89921520207 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:10:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e458 do_prune osdmap full prune enabled
Feb  2 07:10:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e459 e459: 3 total, 3 up, 3 in
Feb  2 07:10:29 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e459: 3 total, 3 up, 3 in
Feb  2 07:10:30 np0005604943 nova_compute[238883]: 2026-02-02 12:10:30.496 238887 DEBUG nova.network.neutron [-] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:10:30 np0005604943 nova_compute[238883]: 2026-02-02 12:10:30.518 238887 INFO nova.compute.manager [-] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Took 1.47 seconds to deallocate network for instance.#033[00m
Feb  2 07:10:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 142 KiB/s rd, 3.0 KiB/s wr, 12 op/s
Feb  2 07:10:30 np0005604943 nova_compute[238883]: 2026-02-02 12:10:30.810 238887 INFO nova.compute.manager [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Took 0.29 seconds to detach 1 volumes for instance.#033[00m
Feb  2 07:10:30 np0005604943 nova_compute[238883]: 2026-02-02 12:10:30.887 238887 DEBUG oslo_concurrency.lockutils [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:10:30 np0005604943 nova_compute[238883]: 2026-02-02 12:10:30.887 238887 DEBUG oslo_concurrency.lockutils [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:10:30 np0005604943 nova_compute[238883]: 2026-02-02 12:10:30.922 238887 DEBUG nova.scheduler.client.report [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Refreshing inventories for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  2 07:10:30 np0005604943 nova_compute[238883]: 2026-02-02 12:10:30.948 238887 DEBUG nova.scheduler.client.report [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Updating ProviderTree inventory for provider 30401227-b88f-415d-9c2d-3119bd1baf61 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  2 07:10:30 np0005604943 nova_compute[238883]: 2026-02-02 12:10:30.948 238887 DEBUG nova.compute.provider_tree [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Updating inventory in ProviderTree for provider 30401227-b88f-415d-9c2d-3119bd1baf61 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 07:10:30 np0005604943 nova_compute[238883]: 2026-02-02 12:10:30.967 238887 DEBUG nova.scheduler.client.report [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Refreshing aggregate associations for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  2 07:10:30 np0005604943 nova_compute[238883]: 2026-02-02 12:10:30.987 238887 DEBUG nova.scheduler.client.report [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Refreshing trait associations for resource provider 30401227-b88f-415d-9c2d-3119bd1baf61, traits: COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.019 238887 DEBUG oslo_concurrency.processutils [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:10:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:10:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2533904219' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.569 238887 DEBUG oslo_concurrency.processutils [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.575 238887 DEBUG nova.compute.provider_tree [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.596 238887 DEBUG nova.scheduler.client.report [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.629 238887 DEBUG oslo_concurrency.lockutils [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.644 238887 DEBUG nova.compute.manager [req-bbe3fa9f-b7d9-41b0-8835-497a0c60c9f5 req-4ccc3ffb-4f99-46d6-9f1e-22c5ee49ce88 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Received event network-vif-plugged-ed2a76ec-632f-4f24-b7b7-e89921520207 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.645 238887 DEBUG oslo_concurrency.lockutils [req-bbe3fa9f-b7d9-41b0-8835-497a0c60c9f5 req-4ccc3ffb-4f99-46d6-9f1e-22c5ee49ce88 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.645 238887 DEBUG oslo_concurrency.lockutils [req-bbe3fa9f-b7d9-41b0-8835-497a0c60c9f5 req-4ccc3ffb-4f99-46d6-9f1e-22c5ee49ce88 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.646 238887 DEBUG oslo_concurrency.lockutils [req-bbe3fa9f-b7d9-41b0-8835-497a0c60c9f5 req-4ccc3ffb-4f99-46d6-9f1e-22c5ee49ce88 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.646 238887 DEBUG nova.compute.manager [req-bbe3fa9f-b7d9-41b0-8835-497a0c60c9f5 req-4ccc3ffb-4f99-46d6-9f1e-22c5ee49ce88 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] No waiting events found dispatching network-vif-plugged-ed2a76ec-632f-4f24-b7b7-e89921520207 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.646 238887 WARNING nova.compute.manager [req-bbe3fa9f-b7d9-41b0-8835-497a0c60c9f5 req-4ccc3ffb-4f99-46d6-9f1e-22c5ee49ce88 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Received unexpected event network-vif-plugged-ed2a76ec-632f-4f24-b7b7-e89921520207 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.646 238887 DEBUG nova.compute.manager [req-bbe3fa9f-b7d9-41b0-8835-497a0c60c9f5 req-4ccc3ffb-4f99-46d6-9f1e-22c5ee49ce88 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Received event network-vif-deleted-ed2a76ec-632f-4f24-b7b7-e89921520207 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.655 238887 INFO nova.scheduler.client.report [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Deleted allocations for instance 63f7d822-7481-4c48-a8f8-d900cc1cbb7d#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.724 238887 DEBUG oslo_concurrency.lockutils [None req-495b6c2f-eaf9-4b9f-b41f-cbf021cb9d26 cd5824e18d5e443cb24d3bf55ff2c553 4c7b49c49c104c079544033b07fb2f3d - - default default] Lock "63f7d822-7481-4c48-a8f8-d900cc1cbb7d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:10:31 np0005604943 nova_compute[238883]: 2026-02-02 12:10:31.829 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e459 do_prune osdmap full prune enabled
Feb  2 07:10:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e460 e460: 3 total, 3 up, 3 in
Feb  2 07:10:31 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb  2 07:10:31 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e460: 3 total, 3 up, 3 in
Feb  2 07:10:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 576 KiB/s rd, 3.0 KiB/s wr, 71 op/s
Feb  2 07:10:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:10:33 np0005604943 nova_compute[238883]: 2026-02-02 12:10:33.797 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:10:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/862686431' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:10:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:10:33 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/862686431' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:10:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e460 do_prune osdmap full prune enabled
Feb  2 07:10:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e461 e461: 3 total, 3 up, 3 in
Feb  2 07:10:33 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e461: 3 total, 3 up, 3 in
Feb  2 07:10:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 4.8 KiB/s wr, 107 op/s
Feb  2 07:10:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:10:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1342585214' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:10:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:10:35 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1342585214' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:10:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 453 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 4.1 KiB/s wr, 91 op/s
Feb  2 07:10:36 np0005604943 nova_compute[238883]: 2026-02-02 12:10:36.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:10:36 np0005604943 nova_compute[238883]: 2026-02-02 12:10:36.831 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:37 np0005604943 nova_compute[238883]: 2026-02-02 12:10:37.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:10:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e461 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:10:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e461 do_prune osdmap full prune enabled
Feb  2 07:10:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e462 e462: 3 total, 3 up, 3 in
Feb  2 07:10:37 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e462: 3 total, 3 up, 3 in
Feb  2 07:10:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 329 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 4.7 KiB/s wr, 129 op/s
Feb  2 07:10:38 np0005604943 nova_compute[238883]: 2026-02-02 12:10:38.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:10:38 np0005604943 nova_compute[238883]: 2026-02-02 12:10:38.799 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:39 np0005604943 nova_compute[238883]: 2026-02-02 12:10:39.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:10:40 np0005604943 nova_compute[238883]: 2026-02-02 12:10:40.433 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:10:40 np0005604943 nova_compute[238883]: 2026-02-02 12:10:40.433 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:10:40 np0005604943 nova_compute[238883]: 2026-02-02 12:10:40.434 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:10:40 np0005604943 nova_compute[238883]: 2026-02-02 12:10:40.434 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:10:40 np0005604943 nova_compute[238883]: 2026-02-02 12:10:40.434 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:10:40 np0005604943 nova_compute[238883]: 2026-02-02 12:10:40.463 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 3.4 KiB/s wr, 92 op/s
Feb  2 07:10:40 np0005604943 nova_compute[238883]: 2026-02-02 12:10:40.568 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:10:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:10:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:10:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:10:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:10:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:10:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:10:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/585233716' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:10:40 np0005604943 nova_compute[238883]: 2026-02-02 12:10:40.940 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:10:41 np0005604943 nova_compute[238883]: 2026-02-02 12:10:41.127 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:10:41 np0005604943 nova_compute[238883]: 2026-02-02 12:10:41.129 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4302MB free_disk=59.98814195487648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:10:41 np0005604943 nova_compute[238883]: 2026-02-02 12:10:41.129 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:10:41 np0005604943 nova_compute[238883]: 2026-02-02 12:10:41.129 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:10:41 np0005604943 nova_compute[238883]: 2026-02-02 12:10:41.433 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:10:41 np0005604943 nova_compute[238883]: 2026-02-02 12:10:41.433 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:10:41 np0005604943 nova_compute[238883]: 2026-02-02 12:10:41.452 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:10:41 np0005604943 nova_compute[238883]: 2026-02-02 12:10:41.834 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:10:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/658188210' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:10:41 np0005604943 nova_compute[238883]: 2026-02-02 12:10:41.966 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:10:41 np0005604943 nova_compute[238883]: 2026-02-02 12:10:41.971 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:10:41 np0005604943 nova_compute[238883]: 2026-02-02 12:10:41.984 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:10:42 np0005604943 nova_compute[238883]: 2026-02-02 12:10:42.008 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:10:42 np0005604943 nova_compute[238883]: 2026-02-02 12:10:42.009 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:10:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.3 KiB/s wr, 46 op/s
Feb  2 07:10:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:10:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e462 do_prune osdmap full prune enabled
Feb  2 07:10:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e463 e463: 3 total, 3 up, 3 in
Feb  2 07:10:42 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e463: 3 total, 3 up, 3 in
Feb  2 07:10:43 np0005604943 nova_compute[238883]: 2026-02-02 12:10:43.774 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770034228.7722313, 63f7d822-7481-4c48-a8f8-d900cc1cbb7d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:10:43 np0005604943 nova_compute[238883]: 2026-02-02 12:10:43.774 238887 INFO nova.compute.manager [-] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:10:43 np0005604943 nova_compute[238883]: 2026-02-02 12:10:43.794 238887 DEBUG nova.compute.manager [None req-df9d9441-dd27-43d4-bb52-2619f3a28718 - - - - - -] [instance: 63f7d822-7481-4c48-a8f8-d900cc1cbb7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:10:43 np0005604943 nova_compute[238883]: 2026-02-02 12:10:43.801 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.5 KiB/s wr, 50 op/s
Feb  2 07:10:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:10:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/837713164' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:10:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e463 do_prune osdmap full prune enabled
Feb  2 07:10:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e464 e464: 3 total, 3 up, 3 in
Feb  2 07:10:44 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e464: 3 total, 3 up, 3 in
Feb  2 07:10:45 np0005604943 nova_compute[238883]: 2026-02-02 12:10:45.009 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:10:45 np0005604943 nova_compute[238883]: 2026-02-02 12:10:45.010 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:10:45 np0005604943 nova_compute[238883]: 2026-02-02 12:10:45.010 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 07:10:45 np0005604943 nova_compute[238883]: 2026-02-02 12:10:45.048 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 07:10:45 np0005604943 nova_compute[238883]: 2026-02-02 12:10:45.048 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:10:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:10:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1633081288' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:10:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:10:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1633081288' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:10:45 np0005604943 nova_compute[238883]: 2026-02-02 12:10:45.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:10:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e464 do_prune osdmap full prune enabled
Feb  2 07:10:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e465 e465: 3 total, 3 up, 3 in
Feb  2 07:10:45 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e465: 3 total, 3 up, 3 in
Feb  2 07:10:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Feb  2 07:10:46 np0005604943 nova_compute[238883]: 2026-02-02 12:10:46.635 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:10:46 np0005604943 nova_compute[238883]: 2026-02-02 12:10:46.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:10:46 np0005604943 nova_compute[238883]: 2026-02-02 12:10:46.642 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:10:46 np0005604943 nova_compute[238883]: 2026-02-02 12:10:46.837 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e465 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:10:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e465 do_prune osdmap full prune enabled
Feb  2 07:10:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e466 e466: 3 total, 3 up, 3 in
Feb  2 07:10:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e466: 3 total, 3 up, 3 in
Feb  2 07:10:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.5 KiB/s wr, 24 op/s
Feb  2 07:10:48 np0005604943 nova_compute[238883]: 2026-02-02 12:10:48.805 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e466 do_prune osdmap full prune enabled
Feb  2 07:10:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e467 e467: 3 total, 3 up, 3 in
Feb  2 07:10:48 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e467: 3 total, 3 up, 3 in
Feb  2 07:10:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:10:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/590822807' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:10:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:10:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/590822807' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:10:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 3.7 KiB/s wr, 44 op/s
Feb  2 07:10:51 np0005604943 podman[270583]: 2026-02-02 12:10:51.041157208 +0000 UTC m=+0.049193370 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 07:10:51 np0005604943 podman[270582]: 2026-02-02 12:10:51.069512412 +0000 UTC m=+0.083088619 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 07:10:51 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 07:10:51 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 7286 writes, 33K keys, 7286 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 7286 writes, 7286 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2069 writes, 9703 keys, 2069 commit groups, 1.0 writes per commit group, ingest: 12.17 MB, 0.02 MB/s#012Interval WAL: 2069 writes, 2069 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    148.9      0.25              0.08        17    0.015       0      0       0.0       0.0#012  L6      1/0   10.07 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5    173.1    143.5      0.91              0.29        16    0.057     81K   9440       0.0       0.0#012 Sum      1/0   10.07 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.5    135.9    144.6      1.16              0.37        33    0.035     81K   9440       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.0    183.4    192.0      0.32              0.12        10    0.032     32K   3644       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    173.1    143.5      0.91              0.29        16    0.057     81K   9440       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    151.5      0.24              0.08        16    0.015       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.1      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.036, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.16 GB write, 0.07 MB/s write, 0.15 GB read, 0.07 MB/s read, 1.2 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cd5e4c78d0#2 capacity: 304.00 MB usage: 18.53 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000239 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1245,17.85 MB,5.8711%) FilterBlock(34,234.48 KB,0.0753252%) IndexBlock(34,461.48 KB,0.148246%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 07:10:51 np0005604943 nova_compute[238883]: 2026-02-02 12:10:51.884 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 3.3 KiB/s wr, 68 op/s
Feb  2 07:10:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e467 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:10:53 np0005604943 nova_compute[238883]: 2026-02-02 12:10:53.808 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 3.6 KiB/s wr, 78 op/s
Feb  2 07:10:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 KiB/s wr, 57 op/s
Feb  2 07:10:56 np0005604943 nova_compute[238883]: 2026-02-02 12:10:56.887 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:57 np0005604943 nova_compute[238883]: 2026-02-02 12:10:57.573 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:57.574 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:10:57 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:10:57.576 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 07:10:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e467 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:10:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e467 do_prune osdmap full prune enabled
Feb  2 07:10:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e468 e468: 3 total, 3 up, 3 in
Feb  2 07:10:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e468: 3 total, 3 up, 3 in
Feb  2 07:10:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:10:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:10:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:10:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:10:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:10:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:10:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:10:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:10:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:10:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:10:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:10:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:10:58 np0005604943 podman[270768]: 2026-02-02 12:10:58.502576956 +0000 UTC m=+0.038407668 container create fc074eebf651a56b1912eb23a6bcc665f334964d790e758a78185924dcf1332a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:10:58 np0005604943 systemd[1]: Started libpod-conmon-fc074eebf651a56b1912eb23a6bcc665f334964d790e758a78185924dcf1332a.scope.
Feb  2 07:10:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 850 B/s wr, 38 op/s
Feb  2 07:10:58 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:10:58 np0005604943 podman[270768]: 2026-02-02 12:10:58.485829008 +0000 UTC m=+0.021659740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:10:58 np0005604943 podman[270768]: 2026-02-02 12:10:58.583726573 +0000 UTC m=+0.119557305 container init fc074eebf651a56b1912eb23a6bcc665f334964d790e758a78185924dcf1332a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:10:58 np0005604943 podman[270768]: 2026-02-02 12:10:58.592652297 +0000 UTC m=+0.128483019 container start fc074eebf651a56b1912eb23a6bcc665f334964d790e758a78185924dcf1332a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 07:10:58 np0005604943 podman[270768]: 2026-02-02 12:10:58.595866151 +0000 UTC m=+0.131696893 container attach fc074eebf651a56b1912eb23a6bcc665f334964d790e758a78185924dcf1332a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mayer, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 07:10:58 np0005604943 systemd[1]: libpod-fc074eebf651a56b1912eb23a6bcc665f334964d790e758a78185924dcf1332a.scope: Deactivated successfully.
Feb  2 07:10:58 np0005604943 romantic_mayer[270784]: 167 167
Feb  2 07:10:58 np0005604943 conmon[270784]: conmon fc074eebf651a56b1912 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc074eebf651a56b1912eb23a6bcc665f334964d790e758a78185924dcf1332a.scope/container/memory.events
Feb  2 07:10:58 np0005604943 podman[270768]: 2026-02-02 12:10:58.600185634 +0000 UTC m=+0.136016356 container died fc074eebf651a56b1912eb23a6bcc665f334964d790e758a78185924dcf1332a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mayer, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:10:58 np0005604943 systemd[1]: var-lib-containers-storage-overlay-9e1f00c792c28a08bc9eb047fd9dee04366e9bcae4a5cadf5a9f78dac2e0acf8-merged.mount: Deactivated successfully.
Feb  2 07:10:58 np0005604943 podman[270768]: 2026-02-02 12:10:58.63890722 +0000 UTC m=+0.174737942 container remove fc074eebf651a56b1912eb23a6bcc665f334964d790e758a78185924dcf1332a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:10:58 np0005604943 systemd[1]: libpod-conmon-fc074eebf651a56b1912eb23a6bcc665f334964d790e758a78185924dcf1332a.scope: Deactivated successfully.
Feb  2 07:10:58 np0005604943 podman[270808]: 2026-02-02 12:10:58.788569163 +0000 UTC m=+0.039780995 container create 50d912dee9b7658223a2ca31c52dbee2b185c0433f117033e57482d8bd3e996c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:10:58 np0005604943 nova_compute[238883]: 2026-02-02 12:10:58.810 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:10:58 np0005604943 systemd[1]: Started libpod-conmon-50d912dee9b7658223a2ca31c52dbee2b185c0433f117033e57482d8bd3e996c.scope.
Feb  2 07:10:58 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:10:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd54a5bede47618c973110718e95f01eb29af5f3e915f918b8b72bea30f652b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:10:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd54a5bede47618c973110718e95f01eb29af5f3e915f918b8b72bea30f652b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:10:58 np0005604943 podman[270808]: 2026-02-02 12:10:58.772316386 +0000 UTC m=+0.023528218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:10:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd54a5bede47618c973110718e95f01eb29af5f3e915f918b8b72bea30f652b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:10:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd54a5bede47618c973110718e95f01eb29af5f3e915f918b8b72bea30f652b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:10:58 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd54a5bede47618c973110718e95f01eb29af5f3e915f918b8b72bea30f652b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:10:58 np0005604943 podman[270808]: 2026-02-02 12:10:58.893708349 +0000 UTC m=+0.144920221 container init 50d912dee9b7658223a2ca31c52dbee2b185c0433f117033e57482d8bd3e996c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:10:58 np0005604943 podman[270808]: 2026-02-02 12:10:58.899992313 +0000 UTC m=+0.151204145 container start 50d912dee9b7658223a2ca31c52dbee2b185c0433f117033e57482d8bd3e996c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 07:10:58 np0005604943 podman[270808]: 2026-02-02 12:10:58.903443264 +0000 UTC m=+0.154655086 container attach 50d912dee9b7658223a2ca31c52dbee2b185c0433f117033e57482d8bd3e996c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 07:10:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:10:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:10:59 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:10:59 np0005604943 funny_bouman[270824]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:10:59 np0005604943 funny_bouman[270824]: --> All data devices are unavailable
Feb  2 07:10:59 np0005604943 systemd[1]: libpod-50d912dee9b7658223a2ca31c52dbee2b185c0433f117033e57482d8bd3e996c.scope: Deactivated successfully.
Feb  2 07:10:59 np0005604943 podman[270808]: 2026-02-02 12:10:59.366637185 +0000 UTC m=+0.617849017 container died 50d912dee9b7658223a2ca31c52dbee2b185c0433f117033e57482d8bd3e996c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:10:59 np0005604943 systemd[1]: var-lib-containers-storage-overlay-cd54a5bede47618c973110718e95f01eb29af5f3e915f918b8b72bea30f652b7-merged.mount: Deactivated successfully.
Feb  2 07:10:59 np0005604943 podman[270808]: 2026-02-02 12:10:59.413866282 +0000 UTC m=+0.665078104 container remove 50d912dee9b7658223a2ca31c52dbee2b185c0433f117033e57482d8bd3e996c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb  2 07:10:59 np0005604943 systemd[1]: libpod-conmon-50d912dee9b7658223a2ca31c52dbee2b185c0433f117033e57482d8bd3e996c.scope: Deactivated successfully.
Feb  2 07:10:59 np0005604943 podman[270917]: 2026-02-02 12:10:59.821629971 +0000 UTC m=+0.037919276 container create a0f7a294569995a3b37c876d847665b8b8253037faa29cd1f5aabea5ff1a630f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 07:10:59 np0005604943 systemd[1]: Started libpod-conmon-a0f7a294569995a3b37c876d847665b8b8253037faa29cd1f5aabea5ff1a630f.scope.
Feb  2 07:10:59 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:10:59 np0005604943 podman[270917]: 2026-02-02 12:10:59.887164758 +0000 UTC m=+0.103454073 container init a0f7a294569995a3b37c876d847665b8b8253037faa29cd1f5aabea5ff1a630f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb  2 07:10:59 np0005604943 podman[270917]: 2026-02-02 12:10:59.89177768 +0000 UTC m=+0.108066965 container start a0f7a294569995a3b37c876d847665b8b8253037faa29cd1f5aabea5ff1a630f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 07:10:59 np0005604943 podman[270917]: 2026-02-02 12:10:59.895207639 +0000 UTC m=+0.111496964 container attach a0f7a294569995a3b37c876d847665b8b8253037faa29cd1f5aabea5ff1a630f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 07:10:59 np0005604943 elated_bhaskara[270933]: 167 167
Feb  2 07:10:59 np0005604943 systemd[1]: libpod-a0f7a294569995a3b37c876d847665b8b8253037faa29cd1f5aabea5ff1a630f.scope: Deactivated successfully.
Feb  2 07:10:59 np0005604943 podman[270917]: 2026-02-02 12:10:59.896479603 +0000 UTC m=+0.112768918 container died a0f7a294569995a3b37c876d847665b8b8253037faa29cd1f5aabea5ff1a630f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:10:59 np0005604943 podman[270917]: 2026-02-02 12:10:59.805776825 +0000 UTC m=+0.022066140 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:10:59 np0005604943 systemd[1]: var-lib-containers-storage-overlay-4201bd5404185ec779ed6ca7980e0d7f125525be79d57f3df2e58e69d859a27c-merged.mount: Deactivated successfully.
Feb  2 07:10:59 np0005604943 podman[270917]: 2026-02-02 12:10:59.927758892 +0000 UTC m=+0.144048187 container remove a0f7a294569995a3b37c876d847665b8b8253037faa29cd1f5aabea5ff1a630f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bhaskara, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb  2 07:10:59 np0005604943 systemd[1]: libpod-conmon-a0f7a294569995a3b37c876d847665b8b8253037faa29cd1f5aabea5ff1a630f.scope: Deactivated successfully.
Feb  2 07:11:00 np0005604943 podman[270957]: 2026-02-02 12:11:00.080540137 +0000 UTC m=+0.054085508 container create e8c635706ec55538d7d27c66a1594b2addc26688f0908ccaeb5cae656af43915 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_saha, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 07:11:00 np0005604943 systemd[1]: Started libpod-conmon-e8c635706ec55538d7d27c66a1594b2addc26688f0908ccaeb5cae656af43915.scope.
Feb  2 07:11:00 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:11:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b737ddbe5b094fcca5a2091277590574e487d80af2b4d2b4a2936ef1ce758e8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:11:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b737ddbe5b094fcca5a2091277590574e487d80af2b4d2b4a2936ef1ce758e8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:11:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b737ddbe5b094fcca5a2091277590574e487d80af2b4d2b4a2936ef1ce758e8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:11:00 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b737ddbe5b094fcca5a2091277590574e487d80af2b4d2b4a2936ef1ce758e8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:11:00 np0005604943 podman[270957]: 2026-02-02 12:11:00.05163409 +0000 UTC m=+0.025179541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:11:00 np0005604943 podman[270957]: 2026-02-02 12:11:00.149939026 +0000 UTC m=+0.123484487 container init e8c635706ec55538d7d27c66a1594b2addc26688f0908ccaeb5cae656af43915 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_saha, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:11:00 np0005604943 podman[270957]: 2026-02-02 12:11:00.160218495 +0000 UTC m=+0.133763866 container start e8c635706ec55538d7d27c66a1594b2addc26688f0908ccaeb5cae656af43915 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_saha, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:11:00 np0005604943 podman[270957]: 2026-02-02 12:11:00.163328548 +0000 UTC m=+0.136873959 container attach e8c635706ec55538d7d27c66a1594b2addc26688f0908ccaeb5cae656af43915 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 07:11:00 np0005604943 laughing_saha[270973]: {
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:    "0": [
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:        {
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "devices": [
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "/dev/loop3"
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            ],
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_name": "ceph_lv0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_size": "21470642176",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "name": "ceph_lv0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "tags": {
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.cluster_name": "ceph",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.crush_device_class": "",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.encrypted": "0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.objectstore": "bluestore",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.osd_id": "0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.type": "block",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.vdo": "0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.with_tpm": "0"
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            },
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "type": "block",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "vg_name": "ceph_vg0"
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:        }
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:    ],
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:    "1": [
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:        {
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "devices": [
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "/dev/loop4"
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            ],
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_name": "ceph_lv1",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_size": "21470642176",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "name": "ceph_lv1",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "tags": {
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.cluster_name": "ceph",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.crush_device_class": "",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.encrypted": "0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.objectstore": "bluestore",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.osd_id": "1",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.type": "block",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.vdo": "0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.with_tpm": "0"
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            },
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "type": "block",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "vg_name": "ceph_vg1"
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:        }
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:    ],
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:    "2": [
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:        {
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "devices": [
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "/dev/loop5"
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            ],
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_name": "ceph_lv2",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_size": "21470642176",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "name": "ceph_lv2",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "tags": {
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.cluster_name": "ceph",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.crush_device_class": "",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.encrypted": "0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.objectstore": "bluestore",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.osd_id": "2",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.type": "block",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.vdo": "0",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:                "ceph.with_tpm": "0"
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            },
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "type": "block",
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:            "vg_name": "ceph_vg2"
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:        }
Feb  2 07:11:00 np0005604943 laughing_saha[270973]:    ]
Feb  2 07:11:00 np0005604943 laughing_saha[270973]: }
Feb  2 07:11:00 np0005604943 systemd[1]: libpod-e8c635706ec55538d7d27c66a1594b2addc26688f0908ccaeb5cae656af43915.scope: Deactivated successfully.
Feb  2 07:11:00 np0005604943 podman[270957]: 2026-02-02 12:11:00.42358236 +0000 UTC m=+0.397127731 container died e8c635706ec55538d7d27c66a1594b2addc26688f0908ccaeb5cae656af43915 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:11:00 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b737ddbe5b094fcca5a2091277590574e487d80af2b4d2b4a2936ef1ce758e8e-merged.mount: Deactivated successfully.
Feb  2 07:11:00 np0005604943 podman[270957]: 2026-02-02 12:11:00.460687162 +0000 UTC m=+0.434232533 container remove e8c635706ec55538d7d27c66a1594b2addc26688f0908ccaeb5cae656af43915 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:11:00 np0005604943 systemd[1]: libpod-conmon-e8c635706ec55538d7d27c66a1594b2addc26688f0908ccaeb5cae656af43915.scope: Deactivated successfully.
Feb  2 07:11:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 818 B/s wr, 37 op/s
Feb  2 07:11:00 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:11:00.577 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:11:00 np0005604943 podman[271056]: 2026-02-02 12:11:00.848790114 +0000 UTC m=+0.035312026 container create c2e82fb5d34da797a4cab82cf572df7e30211282b6a35668b6f3a69495a3763e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:11:00 np0005604943 systemd[1]: Started libpod-conmon-c2e82fb5d34da797a4cab82cf572df7e30211282b6a35668b6f3a69495a3763e.scope.
Feb  2 07:11:00 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:11:00 np0005604943 podman[271056]: 2026-02-02 12:11:00.909832135 +0000 UTC m=+0.096354077 container init c2e82fb5d34da797a4cab82cf572df7e30211282b6a35668b6f3a69495a3763e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb  2 07:11:00 np0005604943 podman[271056]: 2026-02-02 12:11:00.915809571 +0000 UTC m=+0.102331633 container start c2e82fb5d34da797a4cab82cf572df7e30211282b6a35668b6f3a69495a3763e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb  2 07:11:00 np0005604943 podman[271056]: 2026-02-02 12:11:00.919273952 +0000 UTC m=+0.105795894 container attach c2e82fb5d34da797a4cab82cf572df7e30211282b6a35668b6f3a69495a3763e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:11:00 np0005604943 systemd[1]: libpod-c2e82fb5d34da797a4cab82cf572df7e30211282b6a35668b6f3a69495a3763e.scope: Deactivated successfully.
Feb  2 07:11:00 np0005604943 confident_shannon[271073]: 167 167
Feb  2 07:11:00 np0005604943 conmon[271073]: conmon c2e82fb5d34da797a4ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2e82fb5d34da797a4cab82cf572df7e30211282b6a35668b6f3a69495a3763e.scope/container/memory.events
Feb  2 07:11:00 np0005604943 podman[271056]: 2026-02-02 12:11:00.921752937 +0000 UTC m=+0.108274869 container died c2e82fb5d34da797a4cab82cf572df7e30211282b6a35668b6f3a69495a3763e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb  2 07:11:00 np0005604943 podman[271056]: 2026-02-02 12:11:00.832934259 +0000 UTC m=+0.019456211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:11:00 np0005604943 systemd[1]: var-lib-containers-storage-overlay-a36eac19ad8226d08cf55f9eda6cc426a4047e86beefba13bb08ba3c01b548db-merged.mount: Deactivated successfully.
Feb  2 07:11:00 np0005604943 podman[271056]: 2026-02-02 12:11:00.963556682 +0000 UTC m=+0.150078604 container remove c2e82fb5d34da797a4cab82cf572df7e30211282b6a35668b6f3a69495a3763e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 07:11:00 np0005604943 systemd[1]: libpod-conmon-c2e82fb5d34da797a4cab82cf572df7e30211282b6a35668b6f3a69495a3763e.scope: Deactivated successfully.
Feb  2 07:11:01 np0005604943 podman[271097]: 2026-02-02 12:11:01.088742044 +0000 UTC m=+0.038209913 container create da81cbd84521cac8fa50bd00ca5d2572e96cf785e0fdeb1d4e7517fa8101385c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khorana, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb  2 07:11:01 np0005604943 systemd[1]: Started libpod-conmon-da81cbd84521cac8fa50bd00ca5d2572e96cf785e0fdeb1d4e7517fa8101385c.scope.
Feb  2 07:11:01 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:11:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7865a06539506c4b4e37d6cdeb27edb9d01fb987aedb1fb1c82245e8b45ae07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:11:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7865a06539506c4b4e37d6cdeb27edb9d01fb987aedb1fb1c82245e8b45ae07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:11:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7865a06539506c4b4e37d6cdeb27edb9d01fb987aedb1fb1c82245e8b45ae07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:11:01 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7865a06539506c4b4e37d6cdeb27edb9d01fb987aedb1fb1c82245e8b45ae07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:11:01 np0005604943 podman[271097]: 2026-02-02 12:11:01.070351902 +0000 UTC m=+0.019819821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:11:01 np0005604943 podman[271097]: 2026-02-02 12:11:01.173702051 +0000 UTC m=+0.123170020 container init da81cbd84521cac8fa50bd00ca5d2572e96cf785e0fdeb1d4e7517fa8101385c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 07:11:01 np0005604943 podman[271097]: 2026-02-02 12:11:01.179990396 +0000 UTC m=+0.129458265 container start da81cbd84521cac8fa50bd00ca5d2572e96cf785e0fdeb1d4e7517fa8101385c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khorana, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 07:11:01 np0005604943 podman[271097]: 2026-02-02 12:11:01.183830096 +0000 UTC m=+0.133297995 container attach da81cbd84521cac8fa50bd00ca5d2572e96cf785e0fdeb1d4e7517fa8101385c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khorana, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 07:11:01 np0005604943 lvm[271193]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:11:01 np0005604943 lvm[271193]: VG ceph_vg0 finished
Feb  2 07:11:01 np0005604943 lvm[271194]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:11:01 np0005604943 lvm[271194]: VG ceph_vg1 finished
Feb  2 07:11:01 np0005604943 lvm[271196]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:11:01 np0005604943 lvm[271196]: VG ceph_vg2 finished
Feb  2 07:11:01 np0005604943 nova_compute[238883]: 2026-02-02 12:11:01.937 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:01 np0005604943 happy_khorana[271113]: {}
Feb  2 07:11:02 np0005604943 systemd[1]: libpod-da81cbd84521cac8fa50bd00ca5d2572e96cf785e0fdeb1d4e7517fa8101385c.scope: Deactivated successfully.
Feb  2 07:11:02 np0005604943 podman[271097]: 2026-02-02 12:11:02.019590453 +0000 UTC m=+0.969058322 container died da81cbd84521cac8fa50bd00ca5d2572e96cf785e0fdeb1d4e7517fa8101385c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khorana, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb  2 07:11:02 np0005604943 systemd[1]: libpod-da81cbd84521cac8fa50bd00ca5d2572e96cf785e0fdeb1d4e7517fa8101385c.scope: Consumed 1.197s CPU time.
Feb  2 07:11:02 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b7865a06539506c4b4e37d6cdeb27edb9d01fb987aedb1fb1c82245e8b45ae07-merged.mount: Deactivated successfully.
Feb  2 07:11:02 np0005604943 podman[271097]: 2026-02-02 12:11:02.055805172 +0000 UTC m=+1.005273041 container remove da81cbd84521cac8fa50bd00ca5d2572e96cf785e0fdeb1d4e7517fa8101385c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khorana, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 07:11:02 np0005604943 systemd[1]: libpod-conmon-da81cbd84521cac8fa50bd00ca5d2572e96cf785e0fdeb1d4e7517fa8101385c.scope: Deactivated successfully.
Feb  2 07:11:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:11:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:11:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:11:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:11:02 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:11:02 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:11:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 KiB/s wr, 18 op/s
Feb  2 07:11:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e468 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:11:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e468 do_prune osdmap full prune enabled
Feb  2 07:11:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e469 e469: 3 total, 3 up, 3 in
Feb  2 07:11:03 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e469: 3 total, 3 up, 3 in
Feb  2 07:11:03 np0005604943 nova_compute[238883]: 2026-02-02 12:11:03.813 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e469 do_prune osdmap full prune enabled
Feb  2 07:11:04 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e470 e470: 3 total, 3 up, 3 in
Feb  2 07:11:04 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e470: 3 total, 3 up, 3 in
Feb  2 07:11:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 764 B/s rd, 611 B/s wr, 1 op/s
Feb  2 07:11:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:11:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1868557429' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:11:05 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:11:05 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1868557429' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:11:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 639 B/s rd, 511 B/s wr, 1 op/s
Feb  2 07:11:06 np0005604943 nova_compute[238883]: 2026-02-02 12:11:06.939 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e470 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:11:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Feb  2 07:11:08 np0005604943 nova_compute[238883]: 2026-02-02 12:11:08.860 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e470 do_prune osdmap full prune enabled
Feb  2 07:11:09 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e471 e471: 3 total, 3 up, 3 in
Feb  2 07:11:09 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e471: 3 total, 3 up, 3 in
Feb  2 07:11:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:11:09
Feb  2 07:11:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:11:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:11:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['backups', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'vms', 'volumes']
Feb  2 07:11:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:11:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:11:10.036 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:11:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:11:10.037 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:11:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:11:10.037 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:11:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e471 do_prune osdmap full prune enabled
Feb  2 07:11:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e472 e472: 3 total, 3 up, 3 in
Feb  2 07:11:10 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e472: 3 total, 3 up, 3 in
Feb  2 07:11:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Feb  2 07:11:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:11:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:11:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:11:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:11:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:11:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:11:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:11:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:11:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:11:10 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:11:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:11:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:11:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:11:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:11:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:11:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:11:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:11:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2431908658' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:11:11 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:11:11 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2431908658' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:11:11 np0005604943 nova_compute[238883]: 2026-02-02 12:11:11.942 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 3.5 KiB/s wr, 88 op/s
Feb  2 07:11:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e472 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:11:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e472 do_prune osdmap full prune enabled
Feb  2 07:11:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e473 e473: 3 total, 3 up, 3 in
Feb  2 07:11:12 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e473: 3 total, 3 up, 3 in
Feb  2 07:11:13 np0005604943 nova_compute[238883]: 2026-02-02 12:11:13.863 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e473 do_prune osdmap full prune enabled
Feb  2 07:11:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e474 e474: 3 total, 3 up, 3 in
Feb  2 07:11:14 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e474: 3 total, 3 up, 3 in
Feb  2 07:11:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.0 KiB/s wr, 73 op/s
Feb  2 07:11:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e474 do_prune osdmap full prune enabled
Feb  2 07:11:15 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e475 e475: 3 total, 3 up, 3 in
Feb  2 07:11:15 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e475: 3 total, 3 up, 3 in
Feb  2 07:11:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:11:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2502988346' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:11:16 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:11:16 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2502988346' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:11:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 2.7 KiB/s wr, 65 op/s
Feb  2 07:11:16 np0005604943 nova_compute[238883]: 2026-02-02 12:11:16.944 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e475 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:11:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e475 do_prune osdmap full prune enabled
Feb  2 07:11:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e476 e476: 3 total, 3 up, 3 in
Feb  2 07:11:17 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e476: 3 total, 3 up, 3 in
Feb  2 07:11:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 1.2 KiB/s wr, 14 op/s
Feb  2 07:11:18 np0005604943 nova_compute[238883]: 2026-02-02 12:11:18.865 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:19 np0005604943 ovn_controller[145056]: 2026-02-02T12:11:19Z|00281|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Feb  2 07:11:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e476 do_prune osdmap full prune enabled
Feb  2 07:11:19 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e477 e477: 3 total, 3 up, 3 in
Feb  2 07:11:19 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e477: 3 total, 3 up, 3 in
Feb  2 07:11:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e477 do_prune osdmap full prune enabled
Feb  2 07:11:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e478 e478: 3 total, 3 up, 3 in
Feb  2 07:11:20 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e478: 3 total, 3 up, 3 in
Feb  2 07:11:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 61 op/s
Feb  2 07:11:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:11:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4244728237' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:11:21 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:11:21 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4244728237' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.3363685595490097e-06 of space, bias 1.0, pg target 0.0007009105678647029 quantized to 32 (current 32)
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00290741980708483 of space, bias 1.0, pg target 0.872225942125449 quantized to 32 (current 32)
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.5685604428564998e-06 of space, bias 1.0, pg target 0.0004705681328569499 quantized to 32 (current 32)
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006668556945168848 of space, bias 1.0, pg target 0.20005670835506545 quantized to 32 (current 32)
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0770365635600571e-06 of space, bias 4.0, pg target 0.0012924438762720685 quantized to 16 (current 16)
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:11:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 07:11:21 np0005604943 nova_compute[238883]: 2026-02-02 12:11:21.945 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:22 np0005604943 podman[271236]: 2026-02-02 12:11:22.044074077 +0000 UTC m=+0.062045057 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 07:11:22 np0005604943 podman[271235]: 2026-02-02 12:11:22.064426751 +0000 UTC m=+0.084928707 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, 
org.label-schema.build-date=20260127, tcib_managed=true)
Feb  2 07:11:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.2 KiB/s wr, 61 op/s
Feb  2 07:11:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e478 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:11:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e478 do_prune osdmap full prune enabled
Feb  2 07:11:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e479 e479: 3 total, 3 up, 3 in
Feb  2 07:11:22 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e479: 3 total, 3 up, 3 in
Feb  2 07:11:23 np0005604943 nova_compute[238883]: 2026-02-02 12:11:23.922 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 5.0 KiB/s wr, 108 op/s
Feb  2 07:11:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e479 do_prune osdmap full prune enabled
Feb  2 07:11:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e480 e480: 3 total, 3 up, 3 in
Feb  2 07:11:24 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e480: 3 total, 3 up, 3 in
Feb  2 07:11:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e480 do_prune osdmap full prune enabled
Feb  2 07:11:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e481 e481: 3 total, 3 up, 3 in
Feb  2 07:11:25 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e481: 3 total, 3 up, 3 in
Feb  2 07:11:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 KiB/s wr, 60 op/s
Feb  2 07:11:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:11:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2072974711' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:11:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:11:26 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2072974711' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:11:26 np0005604943 nova_compute[238883]: 2026-02-02 12:11:26.948 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e481 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:11:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e481 do_prune osdmap full prune enabled
Feb  2 07:11:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e482 e482: 3 total, 3 up, 3 in
Feb  2 07:11:27 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e482: 3 total, 3 up, 3 in
Feb  2 07:11:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 4.4 KiB/s wr, 114 op/s
Feb  2 07:11:28 np0005604943 nova_compute[238883]: 2026-02-02 12:11:28.924 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e482 do_prune osdmap full prune enabled
Feb  2 07:11:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e483 e483: 3 total, 3 up, 3 in
Feb  2 07:11:29 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e483: 3 total, 3 up, 3 in
Feb  2 07:11:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 3.0 KiB/s wr, 63 op/s
Feb  2 07:11:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e483 do_prune osdmap full prune enabled
Feb  2 07:11:30 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e484 e484: 3 total, 3 up, 3 in
Feb  2 07:11:30 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e484: 3 total, 3 up, 3 in
Feb  2 07:11:31 np0005604943 nova_compute[238883]: 2026-02-02 12:11:31.950 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:11:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3025412374' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:11:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:11:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3025412374' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:11:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.3 KiB/s wr, 73 op/s
Feb  2 07:11:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:11:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e484 do_prune osdmap full prune enabled
Feb  2 07:11:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e485 e485: 3 total, 3 up, 3 in
Feb  2 07:11:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e485: 3 total, 3 up, 3 in
Feb  2 07:11:33 np0005604943 nova_compute[238883]: 2026-02-02 12:11:33.952 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.7 KiB/s wr, 65 op/s
Feb  2 07:11:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e485 do_prune osdmap full prune enabled
Feb  2 07:11:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e486 e486: 3 total, 3 up, 3 in
Feb  2 07:11:34 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e486: 3 total, 3 up, 3 in
Feb  2 07:11:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e486 do_prune osdmap full prune enabled
Feb  2 07:11:35 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e487 e487: 3 total, 3 up, 3 in
Feb  2 07:11:35 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e487: 3 total, 3 up, 3 in
Feb  2 07:11:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 3.0 KiB/s wr, 64 op/s
Feb  2 07:11:36 np0005604943 nova_compute[238883]: 2026-02-02 12:11:36.952 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:11:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1720877090' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:11:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:11:37 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1720877090' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:11:37 np0005604943 nova_compute[238883]: 2026-02-02 12:11:37.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:11:37 np0005604943 nova_compute[238883]: 2026-02-02 12:11:37.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:11:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e487 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:11:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e487 do_prune osdmap full prune enabled
Feb  2 07:11:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e488 e488: 3 total, 3 up, 3 in
Feb  2 07:11:37 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e488: 3 total, 3 up, 3 in
Feb  2 07:11:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 3.2 KiB/s wr, 64 op/s
Feb  2 07:11:38 np0005604943 nova_compute[238883]: 2026-02-02 12:11:38.955 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:39 np0005604943 nova_compute[238883]: 2026-02-02 12:11:39.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:11:39 np0005604943 nova_compute[238883]: 2026-02-02 12:11:39.681 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:11:39 np0005604943 nova_compute[238883]: 2026-02-02 12:11:39.682 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:11:39 np0005604943 nova_compute[238883]: 2026-02-02 12:11:39.682 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:11:39 np0005604943 nova_compute[238883]: 2026-02-02 12:11:39.682 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:11:39 np0005604943 nova_compute[238883]: 2026-02-02 12:11:39.682 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:11:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e488 do_prune osdmap full prune enabled
Feb  2 07:11:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e489 e489: 3 total, 3 up, 3 in
Feb  2 07:11:40 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e489: 3 total, 3 up, 3 in
Feb  2 07:11:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:11:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3344685509' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:11:40 np0005604943 nova_compute[238883]: 2026-02-02 12:11:40.264 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:11:40 np0005604943 nova_compute[238883]: 2026-02-02 12:11:40.385 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:11:40 np0005604943 nova_compute[238883]: 2026-02-02 12:11:40.387 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4352MB free_disk=59.98814096208662GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:11:40 np0005604943 nova_compute[238883]: 2026-02-02 12:11:40.388 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:11:40 np0005604943 nova_compute[238883]: 2026-02-02 12:11:40.388 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:11:40 np0005604943 nova_compute[238883]: 2026-02-02 12:11:40.448 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:11:40 np0005604943 nova_compute[238883]: 2026-02-02 12:11:40.449 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:11:40 np0005604943 nova_compute[238883]: 2026-02-02 12:11:40.465 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:11:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 3.0 KiB/s wr, 64 op/s
Feb  2 07:11:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:11:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:11:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:11:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:11:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:11:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:11:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:11:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/469989249' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:11:41 np0005604943 nova_compute[238883]: 2026-02-02 12:11:41.018 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:11:41 np0005604943 nova_compute[238883]: 2026-02-02 12:11:41.024 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:11:41 np0005604943 nova_compute[238883]: 2026-02-02 12:11:41.043 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:11:41 np0005604943 nova_compute[238883]: 2026-02-02 12:11:41.045 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:11:41 np0005604943 nova_compute[238883]: 2026-02-02 12:11:41.045 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:11:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e489 do_prune osdmap full prune enabled
Feb  2 07:11:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e490 e490: 3 total, 3 up, 3 in
Feb  2 07:11:41 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e490: 3 total, 3 up, 3 in
Feb  2 07:11:41 np0005604943 nova_compute[238883]: 2026-02-02 12:11:41.980 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:42 np0005604943 nova_compute[238883]: 2026-02-02 12:11:42.045 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:11:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:11:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3036321050' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:11:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:11:42 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3036321050' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:11:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 5.0 KiB/s wr, 112 op/s
Feb  2 07:11:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:11:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e490 do_prune osdmap full prune enabled
Feb  2 07:11:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e491 e491: 3 total, 3 up, 3 in
Feb  2 07:11:42 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e491: 3 total, 3 up, 3 in
Feb  2 07:11:43 np0005604943 nova_compute[238883]: 2026-02-02 12:11:43.957 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 4.8 KiB/s wr, 107 op/s
Feb  2 07:11:44 np0005604943 nova_compute[238883]: 2026-02-02 12:11:44.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:11:44 np0005604943 nova_compute[238883]: 2026-02-02 12:11:44.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:11:44 np0005604943 nova_compute[238883]: 2026-02-02 12:11:44.644 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 07:11:44 np0005604943 nova_compute[238883]: 2026-02-02 12:11:44.717 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 07:11:44 np0005604943 nova_compute[238883]: 2026-02-02 12:11:44.718 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:11:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:11:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1843943241' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:11:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:11:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1843943241' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:11:45 np0005604943 nova_compute[238883]: 2026-02-02 12:11:45.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:11:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e491 do_prune osdmap full prune enabled
Feb  2 07:11:46 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e492 e492: 3 total, 3 up, 3 in
Feb  2 07:11:46 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e492: 3 total, 3 up, 3 in
Feb  2 07:11:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 KiB/s wr, 60 op/s
Feb  2 07:11:46 np0005604943 nova_compute[238883]: 2026-02-02 12:11:46.982 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:47 np0005604943 nova_compute[238883]: 2026-02-02 12:11:47.635 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:11:47 np0005604943 nova_compute[238883]: 2026-02-02 12:11:47.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:11:47 np0005604943 nova_compute[238883]: 2026-02-02 12:11:47.641 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:11:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e492 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:11:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e492 do_prune osdmap full prune enabled
Feb  2 07:11:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e493 e493: 3 total, 3 up, 3 in
Feb  2 07:11:47 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e493: 3 total, 3 up, 3 in
Feb  2 07:11:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.5 KiB/s wr, 21 op/s
Feb  2 07:11:49 np0005604943 nova_compute[238883]: 2026-02-02 12:11:49.004 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.136307) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034309136342, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2438, "num_deletes": 272, "total_data_size": 3419986, "memory_usage": 3480208, "flush_reason": "Manual Compaction"}
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034309148944, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3361315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31839, "largest_seqno": 34276, "table_properties": {"data_size": 3349736, "index_size": 7689, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 24275, "raw_average_key_size": 21, "raw_value_size": 3326626, "raw_average_value_size": 2957, "num_data_blocks": 331, "num_entries": 1125, "num_filter_entries": 1125, "num_deletions": 272, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770034159, "oldest_key_time": 1770034159, "file_creation_time": 1770034309, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 12712 microseconds, and 5385 cpu microseconds.
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.149010) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3361315 bytes OK
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.149043) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.150829) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.150893) EVENT_LOG_v1 {"time_micros": 1770034309150880, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.150927) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3409500, prev total WAL file size 3409500, number of live WAL files 2.
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.151948) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3282KB)], [65(10MB)]
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034309152011, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 13922767, "oldest_snapshot_seqno": -1}
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6806 keys, 12182048 bytes, temperature: kUnknown
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034309235406, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 12182048, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12128073, "index_size": 35879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 170522, "raw_average_key_size": 25, "raw_value_size": 11997515, "raw_average_value_size": 1762, "num_data_blocks": 1437, "num_entries": 6806, "num_filter_entries": 6806, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770034309, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.235785) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 12182048 bytes
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.273875) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.7 rd, 145.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 7347, records dropped: 541 output_compression: NoCompression
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.273937) EVENT_LOG_v1 {"time_micros": 1770034309273914, "job": 36, "event": "compaction_finished", "compaction_time_micros": 83499, "compaction_time_cpu_micros": 25776, "output_level": 6, "num_output_files": 1, "total_output_size": 12182048, "num_input_records": 7347, "num_output_records": 6806, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034309274923, "job": 36, "event": "table_file_deletion", "file_number": 67}
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034309277242, "job": 36, "event": "table_file_deletion", "file_number": 65}
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.151830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.277375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.277385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.277387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.277389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:11:49.277391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/819578235' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:11:49 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/819578235' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:11:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.4 KiB/s wr, 19 op/s
Feb  2 07:11:51 np0005604943 nova_compute[238883]: 2026-02-02 12:11:51.635 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:11:51 np0005604943 nova_compute[238883]: 2026-02-02 12:11:51.864 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:11:51.864 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:11:51 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:11:51.865 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 07:11:51 np0005604943 nova_compute[238883]: 2026-02-02 12:11:51.984 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Feb  2 07:11:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:11:53 np0005604943 podman[271326]: 2026-02-02 12:11:53.044232834 +0000 UTC m=+0.055545247 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 07:11:53 np0005604943 podman[271325]: 2026-02-02 12:11:53.063550251 +0000 UTC m=+0.078918680 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb  2 07:11:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 07:11:53 np0005604943 ceph-osd[86144]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 31K writes, 113K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s#012Cumulative WAL: 31K writes, 11K syncs, 2.60 writes per sync, written: 0.08 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 13K writes, 51K keys, 13K commit groups, 1.0 writes per commit group, ingest: 38.60 MB, 0.06 MB/s#012Interval WAL: 13K writes, 5769 syncs, 2.34 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 07:11:54 np0005604943 nova_compute[238883]: 2026-02-02 12:11:54.036 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 42 op/s
Feb  2 07:11:55 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:11:55.868 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:11:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Feb  2 07:11:56 np0005604943 nova_compute[238883]: 2026-02-02 12:11:56.986 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:11:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 07:11:57 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 29K writes, 113K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s#012Cumulative WAL: 29K writes, 10K syncs, 2.70 writes per sync, written: 0.08 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 46K keys, 11K commit groups, 1.0 writes per commit group, ingest: 34.00 MB, 0.06 MB/s#012Interval WAL: 11K writes, 5094 syncs, 2.35 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 07:11:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:11:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e493 do_prune osdmap full prune enabled
Feb  2 07:11:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e494 e494: 3 total, 3 up, 3 in
Feb  2 07:11:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e494: 3 total, 3 up, 3 in
Feb  2 07:11:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.1 KiB/s wr, 28 op/s
Feb  2 07:11:59 np0005604943 nova_compute[238883]: 2026-02-02 12:11:59.038 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 921 B/s wr, 25 op/s
Feb  2 07:12:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 07:12:01 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 23K writes, 93K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 23K writes, 8277 syncs, 2.79 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8596 writes, 36K keys, 8596 commit groups, 1.0 writes per commit group, ingest: 28.03 MB, 0.05 MB/s#012Interval WAL: 8596 writes, 3597 syncs, 2.39 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 07:12:02 np0005604943 nova_compute[238883]: 2026-02-02 12:12:02.022 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 307 B/s wr, 2 op/s
Feb  2 07:12:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:12:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:12:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:12:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:12:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:12:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:12:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:12:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:12:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:12:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:12:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:12:02 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:12:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:12:03 np0005604943 podman[271510]: 2026-02-02 12:12:03.139416457 +0000 UTC m=+0.040275107 container create 449b8059e290fdb4fd2f719cfa546ac7d980ef4912e49044956b86a5c1b27d14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dijkstra, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:12:03 np0005604943 systemd[1]: Started libpod-conmon-449b8059e290fdb4fd2f719cfa546ac7d980ef4912e49044956b86a5c1b27d14.scope.
Feb  2 07:12:03 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:12:03 np0005604943 podman[271510]: 2026-02-02 12:12:03.202674785 +0000 UTC m=+0.103533435 container init 449b8059e290fdb4fd2f719cfa546ac7d980ef4912e49044956b86a5c1b27d14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 07:12:03 np0005604943 podman[271510]: 2026-02-02 12:12:03.210067118 +0000 UTC m=+0.110925758 container start 449b8059e290fdb4fd2f719cfa546ac7d980ef4912e49044956b86a5c1b27d14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dijkstra, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:12:03 np0005604943 podman[271510]: 2026-02-02 12:12:03.215000668 +0000 UTC m=+0.115859328 container attach 449b8059e290fdb4fd2f719cfa546ac7d980ef4912e49044956b86a5c1b27d14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Feb  2 07:12:03 np0005604943 podman[271510]: 2026-02-02 12:12:03.119845714 +0000 UTC m=+0.020704374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:12:03 np0005604943 nice_dijkstra[271526]: 167 167
Feb  2 07:12:03 np0005604943 systemd[1]: libpod-449b8059e290fdb4fd2f719cfa546ac7d980ef4912e49044956b86a5c1b27d14.scope: Deactivated successfully.
Feb  2 07:12:03 np0005604943 podman[271510]: 2026-02-02 12:12:03.217868863 +0000 UTC m=+0.118727503 container died 449b8059e290fdb4fd2f719cfa546ac7d980ef4912e49044956b86a5c1b27d14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dijkstra, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:12:03 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:12:03 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:12:03 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:12:03 np0005604943 systemd[1]: var-lib-containers-storage-overlay-b4a4bfc1ce76eb1678ab15295fd0a576dc289f7c1c15774ebfdd6a0eacb54f1e-merged.mount: Deactivated successfully.
Feb  2 07:12:03 np0005604943 podman[271510]: 2026-02-02 12:12:03.263277003 +0000 UTC m=+0.164135653 container remove 449b8059e290fdb4fd2f719cfa546ac7d980ef4912e49044956b86a5c1b27d14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dijkstra, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 07:12:03 np0005604943 systemd[1]: libpod-conmon-449b8059e290fdb4fd2f719cfa546ac7d980ef4912e49044956b86a5c1b27d14.scope: Deactivated successfully.
Feb  2 07:12:03 np0005604943 podman[271549]: 2026-02-02 12:12:03.41385715 +0000 UTC m=+0.043145032 container create b265a472a0e20c0ef944664f59e29929155f47db289c8e5e274c1637893019bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_franklin, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 07:12:03 np0005604943 systemd[1]: Started libpod-conmon-b265a472a0e20c0ef944664f59e29929155f47db289c8e5e274c1637893019bd.scope.
Feb  2 07:12:03 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:12:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0b58e1ff1ce68be0f9a118b7fd93205597e313b2c6c3fd2e5441c1cbf135cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0b58e1ff1ce68be0f9a118b7fd93205597e313b2c6c3fd2e5441c1cbf135cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0b58e1ff1ce68be0f9a118b7fd93205597e313b2c6c3fd2e5441c1cbf135cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0b58e1ff1ce68be0f9a118b7fd93205597e313b2c6c3fd2e5441c1cbf135cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:03 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0b58e1ff1ce68be0f9a118b7fd93205597e313b2c6c3fd2e5441c1cbf135cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:03 np0005604943 podman[271549]: 2026-02-02 12:12:03.393611839 +0000 UTC m=+0.022899721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:12:03 np0005604943 podman[271549]: 2026-02-02 12:12:03.515725781 +0000 UTC m=+0.145013663 container init b265a472a0e20c0ef944664f59e29929155f47db289c8e5e274c1637893019bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_franklin, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 07:12:03 np0005604943 podman[271549]: 2026-02-02 12:12:03.523329119 +0000 UTC m=+0.152616981 container start b265a472a0e20c0ef944664f59e29929155f47db289c8e5e274c1637893019bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_franklin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 07:12:03 np0005604943 podman[271549]: 2026-02-02 12:12:03.527397166 +0000 UTC m=+0.156685058 container attach b265a472a0e20c0ef944664f59e29929155f47db289c8e5e274c1637893019bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 07:12:03 np0005604943 frosty_franklin[271565]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:12:03 np0005604943 frosty_franklin[271565]: --> All data devices are unavailable
Feb  2 07:12:03 np0005604943 systemd[1]: libpod-b265a472a0e20c0ef944664f59e29929155f47db289c8e5e274c1637893019bd.scope: Deactivated successfully.
Feb  2 07:12:03 np0005604943 podman[271549]: 2026-02-02 12:12:03.979837955 +0000 UTC m=+0.609125827 container died b265a472a0e20c0ef944664f59e29929155f47db289c8e5e274c1637893019bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_franklin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:12:04 np0005604943 systemd[1]: var-lib-containers-storage-overlay-0c0b58e1ff1ce68be0f9a118b7fd93205597e313b2c6c3fd2e5441c1cbf135cd-merged.mount: Deactivated successfully.
Feb  2 07:12:04 np0005604943 podman[271549]: 2026-02-02 12:12:04.02428175 +0000 UTC m=+0.653569612 container remove b265a472a0e20c0ef944664f59e29929155f47db289c8e5e274c1637893019bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_franklin, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:12:04 np0005604943 systemd[1]: libpod-conmon-b265a472a0e20c0ef944664f59e29929155f47db289c8e5e274c1637893019bd.scope: Deactivated successfully.
Feb  2 07:12:04 np0005604943 nova_compute[238883]: 2026-02-02 12:12:04.041 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 07:12:04 np0005604943 podman[271660]: 2026-02-02 12:12:04.487601985 +0000 UTC m=+0.052230820 container create d559bf7054b83ade7d4483fdc85496b38ad61b08acbadb35ed78f0f6f5de7e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 07:12:04 np0005604943 podman[271660]: 2026-02-02 12:12:04.459076347 +0000 UTC m=+0.023705192 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:12:04 np0005604943 systemd[1]: Started libpod-conmon-d559bf7054b83ade7d4483fdc85496b38ad61b08acbadb35ed78f0f6f5de7e78.scope.
Feb  2 07:12:04 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:12:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:12:04 np0005604943 podman[271660]: 2026-02-02 12:12:04.609424328 +0000 UTC m=+0.174053243 container init d559bf7054b83ade7d4483fdc85496b38ad61b08acbadb35ed78f0f6f5de7e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_clarke, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 07:12:04 np0005604943 podman[271660]: 2026-02-02 12:12:04.616551355 +0000 UTC m=+0.181180180 container start d559bf7054b83ade7d4483fdc85496b38ad61b08acbadb35ed78f0f6f5de7e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb  2 07:12:04 np0005604943 heuristic_clarke[271677]: 167 167
Feb  2 07:12:04 np0005604943 systemd[1]: libpod-d559bf7054b83ade7d4483fdc85496b38ad61b08acbadb35ed78f0f6f5de7e78.scope: Deactivated successfully.
Feb  2 07:12:04 np0005604943 conmon[271677]: conmon d559bf7054b83ade7d44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d559bf7054b83ade7d4483fdc85496b38ad61b08acbadb35ed78f0f6f5de7e78.scope/container/memory.events
Feb  2 07:12:04 np0005604943 podman[271660]: 2026-02-02 12:12:04.625701884 +0000 UTC m=+0.190330709 container attach d559bf7054b83ade7d4483fdc85496b38ad61b08acbadb35ed78f0f6f5de7e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_clarke, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:12:04 np0005604943 podman[271660]: 2026-02-02 12:12:04.626339181 +0000 UTC m=+0.190968006 container died d559bf7054b83ade7d4483fdc85496b38ad61b08acbadb35ed78f0f6f5de7e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_clarke, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:12:04 np0005604943 systemd[1]: var-lib-containers-storage-overlay-781c690beb3808880becbc8a6fabdbba3ec484eef2d91b92018e2ae17f462823-merged.mount: Deactivated successfully.
Feb  2 07:12:04 np0005604943 podman[271660]: 2026-02-02 12:12:04.667884131 +0000 UTC m=+0.232512956 container remove d559bf7054b83ade7d4483fdc85496b38ad61b08acbadb35ed78f0f6f5de7e78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:12:04 np0005604943 systemd[1]: libpod-conmon-d559bf7054b83ade7d4483fdc85496b38ad61b08acbadb35ed78f0f6f5de7e78.scope: Deactivated successfully.
Feb  2 07:12:04 np0005604943 podman[271704]: 2026-02-02 12:12:04.782362841 +0000 UTC m=+0.023262910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:12:04 np0005604943 podman[271704]: 2026-02-02 12:12:04.912554313 +0000 UTC m=+0.153454352 container create da83533228e4d53827c6ef404d5e6163992f0ff2c971ba9e39d34310d0b6d474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_robinson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:12:04 np0005604943 systemd[1]: Started libpod-conmon-da83533228e4d53827c6ef404d5e6163992f0ff2c971ba9e39d34310d0b6d474.scope.
Feb  2 07:12:04 np0005604943 ceph-mgr[75558]: [devicehealth INFO root] Check health
Feb  2 07:12:04 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:12:04 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e37e8e2184548b47b5e01a7e9149933557135e0e1014017f39e6858fb8a913/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:04 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e37e8e2184548b47b5e01a7e9149933557135e0e1014017f39e6858fb8a913/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:04 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e37e8e2184548b47b5e01a7e9149933557135e0e1014017f39e6858fb8a913/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:04 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e37e8e2184548b47b5e01a7e9149933557135e0e1014017f39e6858fb8a913/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:04 np0005604943 podman[271704]: 2026-02-02 12:12:04.995049816 +0000 UTC m=+0.235949915 container init da83533228e4d53827c6ef404d5e6163992f0ff2c971ba9e39d34310d0b6d474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_robinson, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:12:05 np0005604943 podman[271704]: 2026-02-02 12:12:05.002493401 +0000 UTC m=+0.243393440 container start da83533228e4d53827c6ef404d5e6163992f0ff2c971ba9e39d34310d0b6d474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_robinson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:12:05 np0005604943 podman[271704]: 2026-02-02 12:12:05.006407323 +0000 UTC m=+0.247307392 container attach da83533228e4d53827c6ef404d5e6163992f0ff2c971ba9e39d34310d0b6d474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_robinson, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]: {
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:    "0": [
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:        {
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "devices": [
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "/dev/loop3"
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            ],
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_name": "ceph_lv0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_size": "21470642176",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "name": "ceph_lv0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "tags": {
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.cluster_name": "ceph",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.crush_device_class": "",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.encrypted": "0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.objectstore": "bluestore",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.osd_id": "0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.type": "block",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.vdo": "0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.with_tpm": "0"
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            },
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "type": "block",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "vg_name": "ceph_vg0"
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:        }
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:    ],
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:    "1": [
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:        {
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "devices": [
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "/dev/loop4"
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            ],
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_name": "ceph_lv1",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_size": "21470642176",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "name": "ceph_lv1",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "tags": {
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.cluster_name": "ceph",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.crush_device_class": "",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.encrypted": "0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.objectstore": "bluestore",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.osd_id": "1",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.type": "block",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.vdo": "0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.with_tpm": "0"
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            },
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "type": "block",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "vg_name": "ceph_vg1"
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:        }
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:    ],
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:    "2": [
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:        {
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "devices": [
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "/dev/loop5"
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            ],
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_name": "ceph_lv2",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_size": "21470642176",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "name": "ceph_lv2",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "tags": {
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.cluster_name": "ceph",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.crush_device_class": "",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.encrypted": "0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.objectstore": "bluestore",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.osd_id": "2",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.type": "block",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.vdo": "0",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:                "ceph.with_tpm": "0"
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            },
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "type": "block",
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:            "vg_name": "ceph_vg2"
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:        }
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]:    ]
Feb  2 07:12:05 np0005604943 dazzling_robinson[271721]: }
Feb  2 07:12:05 np0005604943 systemd[1]: libpod-da83533228e4d53827c6ef404d5e6163992f0ff2c971ba9e39d34310d0b6d474.scope: Deactivated successfully.
Feb  2 07:12:05 np0005604943 podman[271704]: 2026-02-02 12:12:05.316714427 +0000 UTC m=+0.557614496 container died da83533228e4d53827c6ef404d5e6163992f0ff2c971ba9e39d34310d0b6d474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 07:12:05 np0005604943 systemd[1]: var-lib-containers-storage-overlay-52e37e8e2184548b47b5e01a7e9149933557135e0e1014017f39e6858fb8a913-merged.mount: Deactivated successfully.
Feb  2 07:12:05 np0005604943 podman[271704]: 2026-02-02 12:12:05.356242723 +0000 UTC m=+0.597142762 container remove da83533228e4d53827c6ef404d5e6163992f0ff2c971ba9e39d34310d0b6d474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:12:05 np0005604943 systemd[1]: libpod-conmon-da83533228e4d53827c6ef404d5e6163992f0ff2c971ba9e39d34310d0b6d474.scope: Deactivated successfully.
Feb  2 07:12:05 np0005604943 podman[271803]: 2026-02-02 12:12:05.840785904 +0000 UTC m=+0.046786077 container create 6e05ec62ee62581c36d25b0f13e0e2e5f63efa6db37693811e504010820edebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_northcutt, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 07:12:05 np0005604943 systemd[1]: Started libpod-conmon-6e05ec62ee62581c36d25b0f13e0e2e5f63efa6db37693811e504010820edebe.scope.
Feb  2 07:12:05 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:12:05 np0005604943 podman[271803]: 2026-02-02 12:12:05.819894496 +0000 UTC m=+0.025894699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:12:05 np0005604943 podman[271803]: 2026-02-02 12:12:05.920152404 +0000 UTC m=+0.126152597 container init 6e05ec62ee62581c36d25b0f13e0e2e5f63efa6db37693811e504010820edebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 07:12:05 np0005604943 podman[271803]: 2026-02-02 12:12:05.925498185 +0000 UTC m=+0.131498358 container start 6e05ec62ee62581c36d25b0f13e0e2e5f63efa6db37693811e504010820edebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_northcutt, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:12:05 np0005604943 podman[271803]: 2026-02-02 12:12:05.929538811 +0000 UTC m=+0.135539184 container attach 6e05ec62ee62581c36d25b0f13e0e2e5f63efa6db37693811e504010820edebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_northcutt, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb  2 07:12:05 np0005604943 priceless_northcutt[271820]: 167 167
Feb  2 07:12:05 np0005604943 systemd[1]: libpod-6e05ec62ee62581c36d25b0f13e0e2e5f63efa6db37693811e504010820edebe.scope: Deactivated successfully.
Feb  2 07:12:05 np0005604943 podman[271803]: 2026-02-02 12:12:05.932363475 +0000 UTC m=+0.138363648 container died 6e05ec62ee62581c36d25b0f13e0e2e5f63efa6db37693811e504010820edebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_northcutt, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 07:12:05 np0005604943 systemd[1]: var-lib-containers-storage-overlay-85de6bc776120a6401cc45ff1d23238423a87a82176898698c4bb94f96788977-merged.mount: Deactivated successfully.
Feb  2 07:12:05 np0005604943 podman[271803]: 2026-02-02 12:12:05.967122816 +0000 UTC m=+0.173122989 container remove 6e05ec62ee62581c36d25b0f13e0e2e5f63efa6db37693811e504010820edebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 07:12:05 np0005604943 systemd[1]: libpod-conmon-6e05ec62ee62581c36d25b0f13e0e2e5f63efa6db37693811e504010820edebe.scope: Deactivated successfully.
Feb  2 07:12:06 np0005604943 podman[271845]: 2026-02-02 12:12:06.125708803 +0000 UTC m=+0.046390247 container create a6d9f131b542c74770f63250bcd1a8fdbd10a3fd5e563aabd43de780ea6eab2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:12:06 np0005604943 systemd[1]: Started libpod-conmon-a6d9f131b542c74770f63250bcd1a8fdbd10a3fd5e563aabd43de780ea6eab2e.scope.
Feb  2 07:12:06 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:12:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bb9c9826a1434f1d8dbe30e6b7751a70759d54d9a8a519d4d887b4110f623c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bb9c9826a1434f1d8dbe30e6b7751a70759d54d9a8a519d4d887b4110f623c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bb9c9826a1434f1d8dbe30e6b7751a70759d54d9a8a519d4d887b4110f623c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:06 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bb9c9826a1434f1d8dbe30e6b7751a70759d54d9a8a519d4d887b4110f623c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:06 np0005604943 podman[271845]: 2026-02-02 12:12:06.202438794 +0000 UTC m=+0.123120238 container init a6d9f131b542c74770f63250bcd1a8fdbd10a3fd5e563aabd43de780ea6eab2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 07:12:06 np0005604943 podman[271845]: 2026-02-02 12:12:06.107921116 +0000 UTC m=+0.028602590 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:12:06 np0005604943 podman[271845]: 2026-02-02 12:12:06.207565838 +0000 UTC m=+0.128247282 container start a6d9f131b542c74770f63250bcd1a8fdbd10a3fd5e563aabd43de780ea6eab2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Feb  2 07:12:06 np0005604943 podman[271845]: 2026-02-02 12:12:06.211108031 +0000 UTC m=+0.131789475 container attach a6d9f131b542c74770f63250bcd1a8fdbd10a3fd5e563aabd43de780ea6eab2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb  2 07:12:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:12:06 np0005604943 lvm[271937]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:12:06 np0005604943 lvm[271937]: VG ceph_vg0 finished
Feb  2 07:12:06 np0005604943 lvm[271940]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:12:06 np0005604943 lvm[271940]: VG ceph_vg1 finished
Feb  2 07:12:06 np0005604943 lvm[271941]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:12:06 np0005604943 lvm[271941]: VG ceph_vg2 finished
Feb  2 07:12:06 np0005604943 busy_nightingale[271861]: {}
Feb  2 07:12:06 np0005604943 systemd[1]: libpod-a6d9f131b542c74770f63250bcd1a8fdbd10a3fd5e563aabd43de780ea6eab2e.scope: Deactivated successfully.
Feb  2 07:12:06 np0005604943 systemd[1]: libpod-a6d9f131b542c74770f63250bcd1a8fdbd10a3fd5e563aabd43de780ea6eab2e.scope: Consumed 1.098s CPU time.
Feb  2 07:12:06 np0005604943 podman[271845]: 2026-02-02 12:12:06.992921133 +0000 UTC m=+0.913602587 container died a6d9f131b542c74770f63250bcd1a8fdbd10a3fd5e563aabd43de780ea6eab2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:12:07 np0005604943 systemd[1]: var-lib-containers-storage-overlay-d9bb9c9826a1434f1d8dbe30e6b7751a70759d54d9a8a519d4d887b4110f623c-merged.mount: Deactivated successfully.
Feb  2 07:12:07 np0005604943 nova_compute[238883]: 2026-02-02 12:12:07.024 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:07 np0005604943 podman[271845]: 2026-02-02 12:12:07.02901214 +0000 UTC m=+0.949693604 container remove a6d9f131b542c74770f63250bcd1a8fdbd10a3fd5e563aabd43de780ea6eab2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:12:07 np0005604943 systemd[1]: libpod-conmon-a6d9f131b542c74770f63250bcd1a8fdbd10a3fd5e563aabd43de780ea6eab2e.scope: Deactivated successfully.
Feb  2 07:12:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:12:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:12:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:12:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:12:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:12:08 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:12:08 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:12:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:12:09 np0005604943 nova_compute[238883]: 2026-02-02 12:12:09.044 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:12:09
Feb  2 07:12:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:12:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:12:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.mgr', 'vms', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Feb  2 07:12:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:12:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:10.037 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:12:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:10.038 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:12:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:10.038 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:12:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:12:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:12:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:12:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:12:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:12:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:12:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:12:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:12:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:12:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:12:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:12:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:12:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:12:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:12:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:12:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:12:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:12:12 np0005604943 nova_compute[238883]: 2026-02-02 12:12:12.025 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:12:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:12:14 np0005604943 nova_compute[238883]: 2026-02-02 12:12:14.047 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:12:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:12:17 np0005604943 nova_compute[238883]: 2026-02-02 12:12:17.028 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:12:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:12:19 np0005604943 nova_compute[238883]: 2026-02-02 12:12:19.050 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:20 np0005604943 nova_compute[238883]: 2026-02-02 12:12:20.080 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:12:20 np0005604943 nova_compute[238883]: 2026-02-02 12:12:20.080 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:12:20 np0005604943 nova_compute[238883]: 2026-02-02 12:12:20.095 238887 DEBUG nova.compute.manager [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 07:12:20 np0005604943 nova_compute[238883]: 2026-02-02 12:12:20.169 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:12:20 np0005604943 nova_compute[238883]: 2026-02-02 12:12:20.169 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:12:20 np0005604943 nova_compute[238883]: 2026-02-02 12:12:20.176 238887 DEBUG nova.virt.hardware [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 07:12:20 np0005604943 nova_compute[238883]: 2026-02-02 12:12:20.177 238887 INFO nova.compute.claims [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 07:12:20 np0005604943 nova_compute[238883]: 2026-02-02 12:12:20.389 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:12:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:12:20 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:12:20 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550516803' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:12:20 np0005604943 nova_compute[238883]: 2026-02-02 12:12:20.937 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:12:20 np0005604943 nova_compute[238883]: 2026-02-02 12:12:20.945 238887 DEBUG nova.compute.provider_tree [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:12:20 np0005604943 nova_compute[238883]: 2026-02-02 12:12:20.962 238887 DEBUG nova.scheduler.client.report [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:12:20 np0005604943 nova_compute[238883]: 2026-02-02 12:12:20.980 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:12:20 np0005604943 nova_compute[238883]: 2026-02-02 12:12:20.981 238887 DEBUG nova.compute.manager [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.033 238887 DEBUG nova.compute.manager [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.034 238887 DEBUG nova.network.neutron [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.055 238887 INFO nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.072 238887 DEBUG nova.compute.manager [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.170 238887 DEBUG nova.policy [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '070af1bcc4704072a10de7fa6d563de8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '958cf437f65d4a81920df75a49529bf6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.187 238887 DEBUG nova.compute.manager [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.188 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.189 238887 INFO nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Creating image(s)#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.210 238887 DEBUG nova.storage.rbd_utils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] rbd image 164c5391-dfe4-46ea-869a-95b649a1c3c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.233 238887 DEBUG nova.storage.rbd_utils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] rbd image 164c5391-dfe4-46ea-869a-95b649a1c3c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.257 238887 DEBUG nova.storage.rbd_utils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] rbd image 164c5391-dfe4-46ea-869a-95b649a1c3c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.260 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.322 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.323 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.324 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.324 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "0abbf462dbbb0df8d6e00dcd1a826741bca264f8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.350 238887 DEBUG nova.storage.rbd_utils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] rbd image 164c5391-dfe4-46ea-869a-95b649a1c3c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.355 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 164c5391-dfe4-46ea-869a-95b649a1c3c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.594 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0abbf462dbbb0df8d6e00dcd1a826741bca264f8 164c5391-dfe4-46ea-869a-95b649a1c3c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.3385110199199163e-06 of space, bias 1.0, pg target 0.0007015533059759749 quantized to 32 (current 32)
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029075373319037267 of space, bias 1.0, pg target 0.872261199571118 quantized to 32 (current 32)
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.5725503871704348e-06 of space, bias 1.0, pg target 0.00047176511615113046 quantized to 32 (current 32)
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006668561136939139 of space, bias 1.0, pg target 0.20005683410817418 quantized to 32 (current 32)
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0811041332497498e-06 of space, bias 4.0, pg target 0.0012973249598996997 quantized to 16 (current 16)
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:12:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.667 238887 DEBUG nova.storage.rbd_utils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] resizing rbd image 164c5391-dfe4-46ea-869a-95b649a1c3c7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.757 238887 DEBUG nova.network.neutron [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Successfully created port: 900e2f84-b3d4-4547-bc57-6f2929841348 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.766 238887 DEBUG nova.objects.instance [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'migration_context' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.779 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.780 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Ensure instance console log exists: /var/lib/nova/instances/164c5391-dfe4-46ea-869a-95b649a1c3c7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.781 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.781 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:12:21 np0005604943 nova_compute[238883]: 2026-02-02 12:12:21.781 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:12:22 np0005604943 nova_compute[238883]: 2026-02-02 12:12:22.029 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 308 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Feb  2 07:12:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:12:23 np0005604943 nova_compute[238883]: 2026-02-02 12:12:23.355 238887 DEBUG nova.network.neutron [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Successfully updated port: 900e2f84-b3d4-4547-bc57-6f2929841348 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 07:12:23 np0005604943 nova_compute[238883]: 2026-02-02 12:12:23.378 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "refresh_cache-164c5391-dfe4-46ea-869a-95b649a1c3c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:12:23 np0005604943 nova_compute[238883]: 2026-02-02 12:12:23.379 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquired lock "refresh_cache-164c5391-dfe4-46ea-869a-95b649a1c3c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:12:23 np0005604943 nova_compute[238883]: 2026-02-02 12:12:23.379 238887 DEBUG nova.network.neutron [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 07:12:23 np0005604943 nova_compute[238883]: 2026-02-02 12:12:23.452 238887 DEBUG nova.compute.manager [req-86d58680-5b9d-4dc6-b1c1-93a3444e4802 req-73d3560d-4fc2-44b7-905d-e98301ff79ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Received event network-changed-900e2f84-b3d4-4547-bc57-6f2929841348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:12:23 np0005604943 nova_compute[238883]: 2026-02-02 12:12:23.453 238887 DEBUG nova.compute.manager [req-86d58680-5b9d-4dc6-b1c1-93a3444e4802 req-73d3560d-4fc2-44b7-905d-e98301ff79ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Refreshing instance network info cache due to event network-changed-900e2f84-b3d4-4547-bc57-6f2929841348. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:12:23 np0005604943 nova_compute[238883]: 2026-02-02 12:12:23.454 238887 DEBUG oslo_concurrency.lockutils [req-86d58680-5b9d-4dc6-b1c1-93a3444e4802 req-73d3560d-4fc2-44b7-905d-e98301ff79ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-164c5391-dfe4-46ea-869a-95b649a1c3c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:12:23 np0005604943 nova_compute[238883]: 2026-02-02 12:12:23.559 238887 DEBUG nova.network.neutron [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 07:12:24 np0005604943 podman[272169]: 2026-02-02 12:12:24.106811828 +0000 UTC m=+0.117480621 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.107 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:24 np0005604943 podman[272168]: 2026-02-02 12:12:24.124384489 +0000 UTC m=+0.139951930 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.365 238887 DEBUG nova.network.neutron [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Updating instance_info_cache with network_info: [{"id": "900e2f84-b3d4-4547-bc57-6f2929841348", "address": "fa:16:3e:c9:3a:08", "network": {"id": "c59f5e49-0a3a-410a-8325-47d3dec9f7b5", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1181677005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "958cf437f65d4a81920df75a49529bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap900e2f84-b3", "ovs_interfaceid": "900e2f84-b3d4-4547-bc57-6f2929841348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.392 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Releasing lock "refresh_cache-164c5391-dfe4-46ea-869a-95b649a1c3c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.392 238887 DEBUG nova.compute.manager [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Instance network_info: |[{"id": "900e2f84-b3d4-4547-bc57-6f2929841348", "address": "fa:16:3e:c9:3a:08", "network": {"id": "c59f5e49-0a3a-410a-8325-47d3dec9f7b5", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1181677005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "958cf437f65d4a81920df75a49529bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap900e2f84-b3", "ovs_interfaceid": "900e2f84-b3d4-4547-bc57-6f2929841348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.393 238887 DEBUG oslo_concurrency.lockutils [req-86d58680-5b9d-4dc6-b1c1-93a3444e4802 req-73d3560d-4fc2-44b7-905d-e98301ff79ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-164c5391-dfe4-46ea-869a-95b649a1c3c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.393 238887 DEBUG nova.network.neutron [req-86d58680-5b9d-4dc6-b1c1-93a3444e4802 req-73d3560d-4fc2-44b7-905d-e98301ff79ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Refreshing network info cache for port 900e2f84-b3d4-4547-bc57-6f2929841348 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.396 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Start _get_guest_xml network_info=[{"id": "900e2f84-b3d4-4547-bc57-6f2929841348", "address": "fa:16:3e:c9:3a:08", "network": {"id": "c59f5e49-0a3a-410a-8325-47d3dec9f7b5", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1181677005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "958cf437f65d4a81920df75a49529bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap900e2f84-b3", "ovs_interfaceid": "900e2f84-b3d4-4547-bc57-6f2929841348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'boot_index': 0, 'image_id': '21b263f0-00f1-47be-b8b1-e3c07da0a6a2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.401 238887 WARNING nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.407 238887 DEBUG nova.virt.libvirt.host [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.408 238887 DEBUG nova.virt.libvirt.host [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.415 238887 DEBUG nova.virt.libvirt.host [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.416 238887 DEBUG nova.virt.libvirt.host [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.416 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.417 238887 DEBUG nova.virt.hardware [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T11:53:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b42d87e0-ad8c-4643-a8cf-5c3fee723886',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T11:53:14Z,direct_url=<?>,disk_format='qcow2',id=21b263f0-00f1-47be-b8b1-e3c07da0a6a2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5b850e2943f14fbe871e66a87c8f4ca3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T11:53:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.417 238887 DEBUG nova.virt.hardware [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.418 238887 DEBUG nova.virt.hardware [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.418 238887 DEBUG nova.virt.hardware [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.418 238887 DEBUG nova.virt.hardware [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.418 238887 DEBUG nova.virt.hardware [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.419 238887 DEBUG nova.virt.hardware [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.419 238887 DEBUG nova.virt.hardware [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.419 238887 DEBUG nova.virt.hardware [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.419 238887 DEBUG nova.virt.hardware [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.419 238887 DEBUG nova.virt.hardware [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.422 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:12:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 07:12:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:12:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2305869380' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.965 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.986 238887 DEBUG nova.storage.rbd_utils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] rbd image 164c5391-dfe4-46ea-869a-95b649a1c3c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:12:24 np0005604943 nova_compute[238883]: 2026-02-02 12:12:24.990 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:12:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:12:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2487802370' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.485 238887 DEBUG nova.network.neutron [req-86d58680-5b9d-4dc6-b1c1-93a3444e4802 req-73d3560d-4fc2-44b7-905d-e98301ff79ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Updated VIF entry in instance network info cache for port 900e2f84-b3d4-4547-bc57-6f2929841348. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.486 238887 DEBUG nova.network.neutron [req-86d58680-5b9d-4dc6-b1c1-93a3444e4802 req-73d3560d-4fc2-44b7-905d-e98301ff79ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Updating instance_info_cache with network_info: [{"id": "900e2f84-b3d4-4547-bc57-6f2929841348", "address": "fa:16:3e:c9:3a:08", "network": {"id": "c59f5e49-0a3a-410a-8325-47d3dec9f7b5", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1181677005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "958cf437f65d4a81920df75a49529bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap900e2f84-b3", "ovs_interfaceid": "900e2f84-b3d4-4547-bc57-6f2929841348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.497 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.498 238887 DEBUG nova.virt.libvirt.vif [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:12:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1129494028',display_name='tempest-SnapshotDataIntegrityTests-server-1129494028',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1129494028',id=29,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMd27Fbt/2AjwZ7BSXCyqEHmD9EBntm5Erk9UvnC43jyZQiT2isCRvatHrpXLUoPsAJ21gvrK0s2X2qgOwwNSe8NmgEEdwZ83TjSqbD5u0ryoyNIVVBNPA4TMvQ0gqSCyA==',key_name='tempest-keypair-248723971',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='958cf437f65d4a81920df75a49529bf6',ramdisk_id='',reservation_id='r-0xu113v9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-1537204195',owner_user_name='tempest-SnapshotDataIntegrityTests-1537204195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:12:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='070af1bcc4704072a10de7fa6d563de8',uuid=164c5391-dfe4-46ea-869a-95b649a1c3c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "900e2f84-b3d4-4547-bc57-6f2929841348", "address": "fa:16:3e:c9:3a:08", "network": {"id": "c59f5e49-0a3a-410a-8325-47d3dec9f7b5", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1181677005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "958cf437f65d4a81920df75a49529bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap900e2f84-b3", "ovs_interfaceid": "900e2f84-b3d4-4547-bc57-6f2929841348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.499 238887 DEBUG nova.network.os_vif_util [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Converting VIF {"id": "900e2f84-b3d4-4547-bc57-6f2929841348", "address": "fa:16:3e:c9:3a:08", "network": {"id": "c59f5e49-0a3a-410a-8325-47d3dec9f7b5", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1181677005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "958cf437f65d4a81920df75a49529bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap900e2f84-b3", "ovs_interfaceid": "900e2f84-b3d4-4547-bc57-6f2929841348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.499 238887 DEBUG nova.network.os_vif_util [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:3a:08,bridge_name='br-int',has_traffic_filtering=True,id=900e2f84-b3d4-4547-bc57-6f2929841348,network=Network(c59f5e49-0a3a-410a-8325-47d3dec9f7b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap900e2f84-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.500 238887 DEBUG nova.objects.instance [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.518 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] End _get_guest_xml xml=<domain type="kvm">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  <uuid>164c5391-dfe4-46ea-869a-95b649a1c3c7</uuid>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  <name>instance-0000001d</name>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  <memory>131072</memory>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  <vcpu>1</vcpu>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  <metadata>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <nova:name>tempest-SnapshotDataIntegrityTests-server-1129494028</nova:name>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <nova:creationTime>2026-02-02 12:12:24</nova:creationTime>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <nova:flavor name="m1.nano">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:        <nova:memory>128</nova:memory>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:        <nova:disk>1</nova:disk>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:        <nova:swap>0</nova:swap>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:        <nova:vcpus>1</nova:vcpus>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      </nova:flavor>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <nova:owner>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:        <nova:user uuid="070af1bcc4704072a10de7fa6d563de8">tempest-SnapshotDataIntegrityTests-1537204195-project-member</nova:user>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:        <nova:project uuid="958cf437f65d4a81920df75a49529bf6">tempest-SnapshotDataIntegrityTests-1537204195</nova:project>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      </nova:owner>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <nova:root type="image" uuid="21b263f0-00f1-47be-b8b1-e3c07da0a6a2"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <nova:ports>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:        <nova:port uuid="900e2f84-b3d4-4547-bc57-6f2929841348">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:        </nova:port>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      </nova:ports>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    </nova:instance>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  </metadata>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  <sysinfo type="smbios">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <system>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <entry name="manufacturer">RDO</entry>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <entry name="product">OpenStack Compute</entry>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <entry name="serial">164c5391-dfe4-46ea-869a-95b649a1c3c7</entry>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <entry name="uuid">164c5391-dfe4-46ea-869a-95b649a1c3c7</entry>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <entry name="family">Virtual Machine</entry>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    </system>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  </sysinfo>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  <os>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <boot dev="hd"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <smbios mode="sysinfo"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  </os>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  <features>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <acpi/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <apic/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <vmcoreinfo/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  </features>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  <clock offset="utc">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <timer name="hpet" present="no"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  </clock>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  <cpu mode="host-model" match="exact">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  </cpu>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  <devices>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <disk type="network" device="disk">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/164c5391-dfe4-46ea-869a-95b649a1c3c7_disk">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <target dev="vda" bus="virtio"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <disk type="network" device="cdrom">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <driver type="raw" cache="none"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <source protocol="rbd" name="vms/164c5391-dfe4-46ea-869a-95b649a1c3c7_disk.config">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:        <host name="192.168.122.100" port="6789"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      </source>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <auth username="openstack">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:        <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      </auth>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <target dev="sda" bus="sata"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    </disk>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <interface type="ethernet">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <mac address="fa:16:3e:c9:3a:08"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <mtu size="1442"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <target dev="tap900e2f84-b3"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    </interface>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <serial type="pty">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <log file="/var/lib/nova/instances/164c5391-dfe4-46ea-869a-95b649a1c3c7/console.log" append="off"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    </serial>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <video>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <model type="virtio"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    </video>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <input type="tablet" bus="usb"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <rng model="virtio">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <backend model="random">/dev/urandom</backend>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    </rng>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <controller type="usb" index="0"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    <memballoon model="virtio">
Feb  2 07:12:25 np0005604943 nova_compute[238883]:      <stats period="10"/>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:    </memballoon>
Feb  2 07:12:25 np0005604943 nova_compute[238883]:  </devices>
Feb  2 07:12:25 np0005604943 nova_compute[238883]: </domain>
Feb  2 07:12:25 np0005604943 nova_compute[238883]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.519 238887 DEBUG nova.compute.manager [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Preparing to wait for external event network-vif-plugged-900e2f84-b3d4-4547-bc57-6f2929841348 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.519 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.519 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.519 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.520 238887 DEBUG nova.virt.libvirt.vif [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T12:12:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1129494028',display_name='tempest-SnapshotDataIntegrityTests-server-1129494028',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1129494028',id=29,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMd27Fbt/2AjwZ7BSXCyqEHmD9EBntm5Erk9UvnC43jyZQiT2isCRvatHrpXLUoPsAJ21gvrK0s2X2qgOwwNSe8NmgEEdwZ83TjSqbD5u0ryoyNIVVBNPA4TMvQ0gqSCyA==',key_name='tempest-keypair-248723971',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='958cf437f65d4a81920df75a49529bf6',ramdisk_id='',reservation_id='r-0xu113v9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-1537204195',owner_user_name='tempest-SnapshotDataIntegrityTests-1537204195-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T12:12:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='070af1bcc4704072a10de7fa6d563de8',uuid=164c5391-dfe4-46ea-869a-95b649a1c3c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "900e2f84-b3d4-4547-bc57-6f2929841348", "address": "fa:16:3e:c9:3a:08", "network": {"id": "c59f5e49-0a3a-410a-8325-47d3dec9f7b5", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1181677005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "958cf437f65d4a81920df75a49529bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap900e2f84-b3", "ovs_interfaceid": "900e2f84-b3d4-4547-bc57-6f2929841348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.520 238887 DEBUG nova.network.os_vif_util [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Converting VIF {"id": "900e2f84-b3d4-4547-bc57-6f2929841348", "address": "fa:16:3e:c9:3a:08", "network": {"id": "c59f5e49-0a3a-410a-8325-47d3dec9f7b5", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1181677005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "958cf437f65d4a81920df75a49529bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap900e2f84-b3", "ovs_interfaceid": "900e2f84-b3d4-4547-bc57-6f2929841348", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.521 238887 DEBUG nova.network.os_vif_util [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:3a:08,bridge_name='br-int',has_traffic_filtering=True,id=900e2f84-b3d4-4547-bc57-6f2929841348,network=Network(c59f5e49-0a3a-410a-8325-47d3dec9f7b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap900e2f84-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.522 238887 DEBUG os_vif [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:3a:08,bridge_name='br-int',has_traffic_filtering=True,id=900e2f84-b3d4-4547-bc57-6f2929841348,network=Network(c59f5e49-0a3a-410a-8325-47d3dec9f7b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap900e2f84-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.522 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.523 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.523 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.524 238887 DEBUG oslo_concurrency.lockutils [req-86d58680-5b9d-4dc6-b1c1-93a3444e4802 req-73d3560d-4fc2-44b7-905d-e98301ff79ae 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-164c5391-dfe4-46ea-869a-95b649a1c3c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.528 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.528 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap900e2f84-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.528 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap900e2f84-b3, col_values=(('external_ids', {'iface-id': '900e2f84-b3d4-4547-bc57-6f2929841348', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:3a:08', 'vm-uuid': '164c5391-dfe4-46ea-869a-95b649a1c3c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.530 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:25 np0005604943 NetworkManager[49093]: <info>  [1770034345.5310] manager: (tap900e2f84-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.532 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.537 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.538 238887 INFO os_vif [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:3a:08,bridge_name='br-int',has_traffic_filtering=True,id=900e2f84-b3d4-4547-bc57-6f2929841348,network=Network(c59f5e49-0a3a-410a-8325-47d3dec9f7b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap900e2f84-b3')#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.600 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.601 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.601 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No VIF found with MAC fa:16:3e:c9:3a:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.601 238887 INFO nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Using config drive#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.620 238887 DEBUG nova.storage.rbd_utils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] rbd image 164c5391-dfe4-46ea-869a-95b649a1c3c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.931 238887 INFO nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Creating config drive at /var/lib/nova/instances/164c5391-dfe4-46ea-869a-95b649a1c3c7/disk.config#033[00m
Feb  2 07:12:25 np0005604943 nova_compute[238883]: 2026-02-02 12:12:25.937 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/164c5391-dfe4-46ea-869a-95b649a1c3c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpw5v6eryj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.061 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/164c5391-dfe4-46ea-869a-95b649a1c3c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpw5v6eryj" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.094 238887 DEBUG nova.storage.rbd_utils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] rbd image 164c5391-dfe4-46ea-869a-95b649a1c3c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.097 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/164c5391-dfe4-46ea-869a-95b649a1c3c7/disk.config 164c5391-dfe4-46ea-869a-95b649a1c3c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.214 238887 DEBUG oslo_concurrency.processutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/164c5391-dfe4-46ea-869a-95b649a1c3c7/disk.config 164c5391-dfe4-46ea-869a-95b649a1c3c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.216 238887 INFO nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Deleting local config drive /var/lib/nova/instances/164c5391-dfe4-46ea-869a-95b649a1c3c7/disk.config because it was imported into RBD.#033[00m
Feb  2 07:12:26 np0005604943 kernel: tap900e2f84-b3: entered promiscuous mode
Feb  2 07:12:26 np0005604943 NetworkManager[49093]: <info>  [1770034346.2783] manager: (tap900e2f84-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Feb  2 07:12:26 np0005604943 ovn_controller[145056]: 2026-02-02T12:12:26Z|00282|binding|INFO|Claiming lport 900e2f84-b3d4-4547-bc57-6f2929841348 for this chassis.
Feb  2 07:12:26 np0005604943 ovn_controller[145056]: 2026-02-02T12:12:26Z|00283|binding|INFO|900e2f84-b3d4-4547-bc57-6f2929841348: Claiming fa:16:3e:c9:3a:08 10.100.0.10
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.329 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.331 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.334 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.345 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:3a:08 10.100.0.10'], port_security=['fa:16:3e:c9:3a:08 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '164c5391-dfe4-46ea-869a-95b649a1c3c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c59f5e49-0a3a-410a-8325-47d3dec9f7b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '958cf437f65d4a81920df75a49529bf6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97827449-3627-48d4-98b2-464ce4a68259 fd8be42c-59c5-490e-924a-1e23858192d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f147b1c-ca00-4250-8ab8-ad8ad4e3ed98, chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=900e2f84-b3d4-4547-bc57-6f2929841348) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.347 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 900e2f84-b3d4-4547-bc57-6f2929841348 in datapath c59f5e49-0a3a-410a-8325-47d3dec9f7b5 bound to our chassis#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.349 155011 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c59f5e49-0a3a-410a-8325-47d3dec9f7b5#033[00m
Feb  2 07:12:26 np0005604943 systemd-udevd[272347]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 07:12:26 np0005604943 ovn_controller[145056]: 2026-02-02T12:12:26Z|00284|binding|INFO|Setting lport 900e2f84-b3d4-4547-bc57-6f2929841348 ovn-installed in OVS
Feb  2 07:12:26 np0005604943 ovn_controller[145056]: 2026-02-02T12:12:26Z|00285|binding|INFO|Setting lport 900e2f84-b3d4-4547-bc57-6f2929841348 up in Southbound
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.358 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:26 np0005604943 systemd-machined[206973]: New machine qemu-29-instance-0000001d.
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.361 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[4e095822-cbe6-4517-96b4-605bbf6e5358]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.362 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc59f5e49-01 in ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 07:12:26 np0005604943 systemd[1]: Started Virtual Machine qemu-29-instance-0000001d.
Feb  2 07:12:26 np0005604943 NetworkManager[49093]: <info>  [1770034346.3701] device (tap900e2f84-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 07:12:26 np0005604943 NetworkManager[49093]: <info>  [1770034346.3708] device (tap900e2f84-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.370 245329 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc59f5e49-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.370 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[d0407d5e-c144-4dc1-a4b3-298782e56697]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.371 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[9a70fbf2-dece-4a5c-9f9b-f9749d4d059e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.384 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[942741f3-d5f6-4186-9d7e-f4ba95e46fa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.406 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[69b1e4fb-7a5b-43d2-afab-3d99a569cb9a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.428 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[560d3c11-9017-4f26-a2f7-ab70bf8ebe76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 NetworkManager[49093]: <info>  [1770034346.4341] manager: (tapc59f5e49-00): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.432 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb41a7f-cee4-4f73-b8fb-98b94ac682d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.471 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[c44bc8fe-de4c-45df-a6dd-561543695c5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.477 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[54f44052-0bfe-4ed1-b50c-5bbe4231034d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 NetworkManager[49093]: <info>  [1770034346.5014] device (tapc59f5e49-00): carrier: link connected
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.507 245414 DEBUG oslo.privsep.daemon [-] privsep: reply[483ee93a-4df0-4927-92ef-4b278c4681cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.525 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[30b16ff3-0107-4979-bc88-3383e5e09676]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc59f5e49-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:52:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479185, 'reachable_time': 39508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272381, 'error': None, 'target': 'ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.542 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc56244-218c-4451-bae0-7953d1d826e2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:527b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479185, 'tstamp': 479185}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272382, 'error': None, 'target': 'ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.562 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[183b5e9b-9d84-4804-b603-164373f00d70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc59f5e49-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:52:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479185, 'reachable_time': 39508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272383, 'error': None, 'target': 'ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.591 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[c16548b4-9831-4422-a61c-8582226e3f92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.649 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[06ef595d-a94c-47b6-8147-7119cddb8292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.652 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc59f5e49-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.652 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.652 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc59f5e49-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:12:26 np0005604943 NetworkManager[49093]: <info>  [1770034346.6552] manager: (tapc59f5e49-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Feb  2 07:12:26 np0005604943 kernel: tapc59f5e49-00: entered promiscuous mode
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.654 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.660 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc59f5e49-00, col_values=(('external_ids', {'iface-id': '310ea7d5-de1c-4059-9f23-e1aced8de783'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:12:26 np0005604943 ovn_controller[145056]: 2026-02-02T12:12:26Z|00286|binding|INFO|Releasing lport 310ea7d5-de1c-4059-9f23-e1aced8de783 from this chassis (sb_readonly=0)
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.662 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.665 155011 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c59f5e49-0a3a-410a-8325-47d3dec9f7b5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c59f5e49-0a3a-410a-8325-47d3dec9f7b5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.666 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[7f01e073-2ad2-48b5-9ec9-6091c38238d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.668 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.668 155011 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: global
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    log         /dev/log local0 debug
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    log-tag     haproxy-metadata-proxy-c59f5e49-0a3a-410a-8325-47d3dec9f7b5
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    user        root
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    group       root
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    maxconn     1024
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    pidfile     /var/lib/neutron/external/pids/c59f5e49-0a3a-410a-8325-47d3dec9f7b5.pid.haproxy
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    daemon
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: defaults
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    log global
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    mode http
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    option httplog
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    option dontlognull
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    option http-server-close
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    option forwardfor
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    retries                 3
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    timeout http-request    30s
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    timeout connect         30s
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    timeout client          32s
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    timeout server          32s
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    timeout http-keep-alive 30s
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: listen listener
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    bind 169.254.169.254:80
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]:    http-request add-header X-OVN-Network-ID c59f5e49-0a3a-410a-8325-47d3dec9f7b5
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 07:12:26 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:12:26.669 155011 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5', 'env', 'PROCESS_TAG=haproxy-c59f5e49-0a3a-410a-8325-47d3dec9f7b5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c59f5e49-0a3a-410a-8325-47d3dec9f7b5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.790 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034346.7895117, 164c5391-dfe4-46ea-869a-95b649a1c3c7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.791 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] VM Started (Lifecycle Event)#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.820 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.833 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034346.7900336, 164c5391-dfe4-46ea-869a-95b649a1c3c7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.834 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] VM Paused (Lifecycle Event)#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.852 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.856 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:12:26 np0005604943 nova_compute[238883]: 2026-02-02 12:12:26.877 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:12:27 np0005604943 podman[272457]: 2026-02-02 12:12:27.029788769 +0000 UTC m=+0.048821267 container create 02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.032 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:27 np0005604943 systemd[1]: Started libpod-conmon-02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093.scope.
Feb  2 07:12:27 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:12:27 np0005604943 podman[272457]: 2026-02-02 12:12:27.005532435 +0000 UTC m=+0.024564943 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb  2 07:12:27 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7856b69b91fa469ba5c25d85e11f23dc7525eb9242a952122517d0844d3cd868/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 07:12:27 np0005604943 podman[272457]: 2026-02-02 12:12:27.117220166 +0000 UTC m=+0.136252684 container init 02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb  2 07:12:27 np0005604943 podman[272457]: 2026-02-02 12:12:27.122755115 +0000 UTC m=+0.141787613 container start 02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 07:12:27 np0005604943 neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5[272472]: [NOTICE]   (272476) : New worker (272478) forked
Feb  2 07:12:27 np0005604943 neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5[272472]: [NOTICE]   (272476) : Loading success.
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.385 238887 DEBUG nova.compute.manager [req-18bdf9e8-5695-41e8-8800-75b0ce1b862c req-32c3b332-b70e-4ffe-acf5-77a03121688c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Received event network-vif-plugged-900e2f84-b3d4-4547-bc57-6f2929841348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.385 238887 DEBUG oslo_concurrency.lockutils [req-18bdf9e8-5695-41e8-8800-75b0ce1b862c req-32c3b332-b70e-4ffe-acf5-77a03121688c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.386 238887 DEBUG oslo_concurrency.lockutils [req-18bdf9e8-5695-41e8-8800-75b0ce1b862c req-32c3b332-b70e-4ffe-acf5-77a03121688c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.387 238887 DEBUG oslo_concurrency.lockutils [req-18bdf9e8-5695-41e8-8800-75b0ce1b862c req-32c3b332-b70e-4ffe-acf5-77a03121688c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.387 238887 DEBUG nova.compute.manager [req-18bdf9e8-5695-41e8-8800-75b0ce1b862c req-32c3b332-b70e-4ffe-acf5-77a03121688c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Processing event network-vif-plugged-900e2f84-b3d4-4547-bc57-6f2929841348 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.388 238887 DEBUG nova.compute.manager [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.392 238887 DEBUG nova.virt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Emitting event <LifecycleEvent: 1770034347.392378, 164c5391-dfe4-46ea-869a-95b649a1c3c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.393 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] VM Resumed (Lifecycle Event)#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.395 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.400 238887 INFO nova.virt.libvirt.driver [-] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Instance spawned successfully.#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.401 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.416 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.420 238887 DEBUG nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.473 238887 INFO nova.compute.manager [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.477 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.477 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.478 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.478 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.478 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.479 238887 DEBUG nova.virt.libvirt.driver [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.544 238887 INFO nova.compute.manager [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Took 6.36 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.544 238887 DEBUG nova.compute.manager [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.602 238887 INFO nova.compute.manager [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Took 7.46 seconds to build instance.#033[00m
Feb  2 07:12:27 np0005604943 nova_compute[238883]: 2026-02-02 12:12:27.626 238887 DEBUG oslo_concurrency.lockutils [None req-7cf3463d-5d4b-45e9-a4cb-5621661e7a54 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:12:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:12:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Feb  2 07:12:29 np0005604943 nova_compute[238883]: 2026-02-02 12:12:29.464 238887 DEBUG nova.compute.manager [req-3752aa9d-3525-4546-88a3-07503e59b1fc req-d6c0a211-9a37-4f07-b077-ffdf90ef8b2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Received event network-vif-plugged-900e2f84-b3d4-4547-bc57-6f2929841348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:12:29 np0005604943 nova_compute[238883]: 2026-02-02 12:12:29.464 238887 DEBUG oslo_concurrency.lockutils [req-3752aa9d-3525-4546-88a3-07503e59b1fc req-d6c0a211-9a37-4f07-b077-ffdf90ef8b2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:12:29 np0005604943 nova_compute[238883]: 2026-02-02 12:12:29.464 238887 DEBUG oslo_concurrency.lockutils [req-3752aa9d-3525-4546-88a3-07503e59b1fc req-d6c0a211-9a37-4f07-b077-ffdf90ef8b2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:12:29 np0005604943 nova_compute[238883]: 2026-02-02 12:12:29.465 238887 DEBUG oslo_concurrency.lockutils [req-3752aa9d-3525-4546-88a3-07503e59b1fc req-d6c0a211-9a37-4f07-b077-ffdf90ef8b2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:12:29 np0005604943 nova_compute[238883]: 2026-02-02 12:12:29.465 238887 DEBUG nova.compute.manager [req-3752aa9d-3525-4546-88a3-07503e59b1fc req-d6c0a211-9a37-4f07-b077-ffdf90ef8b2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] No waiting events found dispatching network-vif-plugged-900e2f84-b3d4-4547-bc57-6f2929841348 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:12:29 np0005604943 nova_compute[238883]: 2026-02-02 12:12:29.465 238887 WARNING nova.compute.manager [req-3752aa9d-3525-4546-88a3-07503e59b1fc req-d6c0a211-9a37-4f07-b077-ffdf90ef8b2c 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Received unexpected event network-vif-plugged-900e2f84-b3d4-4547-bc57-6f2929841348 for instance with vm_state active and task_state None.#033[00m
Feb  2 07:12:30 np0005604943 nova_compute[238883]: 2026-02-02 12:12:30.531 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 523 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb  2 07:12:30 np0005604943 nova_compute[238883]: 2026-02-02 12:12:30.645 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:30 np0005604943 NetworkManager[49093]: <info>  [1770034350.6459] manager: (patch-br-int-to-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Feb  2 07:12:30 np0005604943 NetworkManager[49093]: <info>  [1770034350.6469] manager: (patch-provnet-b083f27c-a844-4e95-81ce-0ce80ab4824b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Feb  2 07:12:30 np0005604943 nova_compute[238883]: 2026-02-02 12:12:30.670 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:30 np0005604943 ovn_controller[145056]: 2026-02-02T12:12:30Z|00287|binding|INFO|Releasing lport 310ea7d5-de1c-4059-9f23-e1aced8de783 from this chassis (sb_readonly=0)
Feb  2 07:12:30 np0005604943 nova_compute[238883]: 2026-02-02 12:12:30.679 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:31 np0005604943 nova_compute[238883]: 2026-02-02 12:12:31.017 238887 DEBUG nova.compute.manager [req-8056f458-10ff-4143-810c-58a61a98600a req-5f15e869-1535-48ac-b69d-3be655e78b98 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Received event network-changed-900e2f84-b3d4-4547-bc57-6f2929841348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:12:31 np0005604943 nova_compute[238883]: 2026-02-02 12:12:31.018 238887 DEBUG nova.compute.manager [req-8056f458-10ff-4143-810c-58a61a98600a req-5f15e869-1535-48ac-b69d-3be655e78b98 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Refreshing instance network info cache due to event network-changed-900e2f84-b3d4-4547-bc57-6f2929841348. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 07:12:31 np0005604943 nova_compute[238883]: 2026-02-02 12:12:31.018 238887 DEBUG oslo_concurrency.lockutils [req-8056f458-10ff-4143-810c-58a61a98600a req-5f15e869-1535-48ac-b69d-3be655e78b98 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "refresh_cache-164c5391-dfe4-46ea-869a-95b649a1c3c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:12:31 np0005604943 nova_compute[238883]: 2026-02-02 12:12:31.018 238887 DEBUG oslo_concurrency.lockutils [req-8056f458-10ff-4143-810c-58a61a98600a req-5f15e869-1535-48ac-b69d-3be655e78b98 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquired lock "refresh_cache-164c5391-dfe4-46ea-869a-95b649a1c3c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:12:31 np0005604943 nova_compute[238883]: 2026-02-02 12:12:31.018 238887 DEBUG nova.network.neutron [req-8056f458-10ff-4143-810c-58a61a98600a req-5f15e869-1535-48ac-b69d-3be655e78b98 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Refreshing network info cache for port 900e2f84-b3d4-4547-bc57-6f2929841348 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 07:12:32 np0005604943 nova_compute[238883]: 2026-02-02 12:12:32.034 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:32 np0005604943 nova_compute[238883]: 2026-02-02 12:12:32.255 238887 DEBUG nova.network.neutron [req-8056f458-10ff-4143-810c-58a61a98600a req-5f15e869-1535-48ac-b69d-3be655e78b98 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Updated VIF entry in instance network info cache for port 900e2f84-b3d4-4547-bc57-6f2929841348. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 07:12:32 np0005604943 nova_compute[238883]: 2026-02-02 12:12:32.256 238887 DEBUG nova.network.neutron [req-8056f458-10ff-4143-810c-58a61a98600a req-5f15e869-1535-48ac-b69d-3be655e78b98 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Updating instance_info_cache with network_info: [{"id": "900e2f84-b3d4-4547-bc57-6f2929841348", "address": "fa:16:3e:c9:3a:08", "network": {"id": "c59f5e49-0a3a-410a-8325-47d3dec9f7b5", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1181677005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "958cf437f65d4a81920df75a49529bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap900e2f84-b3", "ovs_interfaceid": "900e2f84-b3d4-4547-bc57-6f2929841348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:12:32 np0005604943 nova_compute[238883]: 2026-02-02 12:12:32.281 238887 DEBUG oslo_concurrency.lockutils [req-8056f458-10ff-4143-810c-58a61a98600a req-5f15e869-1535-48ac-b69d-3be655e78b98 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Releasing lock "refresh_cache-164c5391-dfe4-46ea-869a-95b649a1c3c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:12:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb  2 07:12:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:12:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 364 KiB/s wr, 76 op/s
Feb  2 07:12:35 np0005604943 nova_compute[238883]: 2026-02-02 12:12:35.533 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Feb  2 07:12:37 np0005604943 nova_compute[238883]: 2026-02-02 12:12:37.035 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:12:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Feb  2 07:12:39 np0005604943 nova_compute[238883]: 2026-02-02 12:12:39.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:12:39 np0005604943 nova_compute[238883]: 2026-02-02 12:12:39.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:12:39 np0005604943 nova_compute[238883]: 2026-02-02 12:12:39.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:12:39 np0005604943 nova_compute[238883]: 2026-02-02 12:12:39.660 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:12:39 np0005604943 nova_compute[238883]: 2026-02-02 12:12:39.660 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:12:39 np0005604943 nova_compute[238883]: 2026-02-02 12:12:39.660 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:12:39 np0005604943 nova_compute[238883]: 2026-02-02 12:12:39.660 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:12:39 np0005604943 nova_compute[238883]: 2026-02-02 12:12:39.660 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:12:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:12:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/933079465' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:12:40 np0005604943 nova_compute[238883]: 2026-02-02 12:12:40.156 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:12:40 np0005604943 ovn_controller[145056]: 2026-02-02T12:12:40Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:3a:08 10.100.0.10
Feb  2 07:12:40 np0005604943 ovn_controller[145056]: 2026-02-02T12:12:40Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:3a:08 10.100.0.10
Feb  2 07:12:40 np0005604943 nova_compute[238883]: 2026-02-02 12:12:40.213 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:12:40 np0005604943 nova_compute[238883]: 2026-02-02 12:12:40.214 238887 DEBUG nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 07:12:40 np0005604943 nova_compute[238883]: 2026-02-02 12:12:40.360 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:12:40 np0005604943 nova_compute[238883]: 2026-02-02 12:12:40.361 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4087MB free_disk=59.96721012983471GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:12:40 np0005604943 nova_compute[238883]: 2026-02-02 12:12:40.361 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:12:40 np0005604943 nova_compute[238883]: 2026-02-02 12:12:40.361 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:12:40 np0005604943 nova_compute[238883]: 2026-02-02 12:12:40.508 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 07:12:40 np0005604943 nova_compute[238883]: 2026-02-02 12:12:40.509 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:12:40 np0005604943 nova_compute[238883]: 2026-02-02 12:12:40.509 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:12:40 np0005604943 nova_compute[238883]: 2026-02-02 12:12:40.535 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 325 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 566 KiB/s wr, 74 op/s
Feb  2 07:12:40 np0005604943 nova_compute[238883]: 2026-02-02 12:12:40.659 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:12:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:12:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:12:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:12:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:12:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:12:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:12:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:12:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1671647015' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:12:41 np0005604943 nova_compute[238883]: 2026-02-02 12:12:41.168 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:12:41 np0005604943 nova_compute[238883]: 2026-02-02 12:12:41.175 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:12:41 np0005604943 nova_compute[238883]: 2026-02-02 12:12:41.194 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:12:41 np0005604943 nova_compute[238883]: 2026-02-02 12:12:41.217 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:12:41 np0005604943 nova_compute[238883]: 2026-02-02 12:12:41.217 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:12:42 np0005604943 nova_compute[238883]: 2026-02-02 12:12:42.036 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:42 np0005604943 nova_compute[238883]: 2026-02-02 12:12:42.217 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:12:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 343 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 106 op/s
Feb  2 07:12:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:12:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 350 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb  2 07:12:44 np0005604943 nova_compute[238883]: 2026-02-02 12:12:44.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:12:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:12:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/761476539' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:12:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:12:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/761476539' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:12:45 np0005604943 nova_compute[238883]: 2026-02-02 12:12:45.537 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:45 np0005604943 nova_compute[238883]: 2026-02-02 12:12:45.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:12:45 np0005604943 nova_compute[238883]: 2026-02-02 12:12:45.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:12:45 np0005604943 nova_compute[238883]: 2026-02-02 12:12:45.642 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  2 07:12:45 np0005604943 nova_compute[238883]: 2026-02-02 12:12:45.657 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  2 07:12:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 350 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb  2 07:12:46 np0005604943 nova_compute[238883]: 2026-02-02 12:12:46.658 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:12:46 np0005604943 nova_compute[238883]: 2026-02-02 12:12:46.659 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:12:46 np0005604943 nova_compute[238883]: 2026-02-02 12:12:46.659 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 07:12:47 np0005604943 nova_compute[238883]: 2026-02-02 12:12:47.039 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:47 np0005604943 nova_compute[238883]: 2026-02-02 12:12:47.151 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "refresh_cache-164c5391-dfe4-46ea-869a-95b649a1c3c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 07:12:47 np0005604943 nova_compute[238883]: 2026-02-02 12:12:47.151 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquired lock "refresh_cache-164c5391-dfe4-46ea-869a-95b649a1c3c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 07:12:47 np0005604943 nova_compute[238883]: 2026-02-02 12:12:47.152 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb  2 07:12:47 np0005604943 nova_compute[238883]: 2026-02-02 12:12:47.152 238887 DEBUG nova.objects.instance [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:12:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:12:48 np0005604943 nova_compute[238883]: 2026-02-02 12:12:48.176 238887 DEBUG nova.network.neutron [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Updating instance_info_cache with network_info: [{"id": "900e2f84-b3d4-4547-bc57-6f2929841348", "address": "fa:16:3e:c9:3a:08", "network": {"id": "c59f5e49-0a3a-410a-8325-47d3dec9f7b5", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1181677005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "958cf437f65d4a81920df75a49529bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap900e2f84-b3", "ovs_interfaceid": "900e2f84-b3d4-4547-bc57-6f2929841348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:12:48 np0005604943 nova_compute[238883]: 2026-02-02 12:12:48.189 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Releasing lock "refresh_cache-164c5391-dfe4-46ea-869a-95b649a1c3c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 07:12:48 np0005604943 nova_compute[238883]: 2026-02-02 12:12:48.190 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb  2 07:12:48 np0005604943 nova_compute[238883]: 2026-02-02 12:12:48.190 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:12:48 np0005604943 nova_compute[238883]: 2026-02-02 12:12:48.190 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:12:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 350 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb  2 07:12:49 np0005604943 nova_compute[238883]: 2026-02-02 12:12:49.168 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:12:49 np0005604943 nova_compute[238883]: 2026-02-02 12:12:49.752 238887 DEBUG oslo_concurrency.lockutils [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:12:49 np0005604943 nova_compute[238883]: 2026-02-02 12:12:49.753 238887 DEBUG oslo_concurrency.lockutils [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:12:49 np0005604943 nova_compute[238883]: 2026-02-02 12:12:49.774 238887 DEBUG nova.objects.instance [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:12:49 np0005604943 nova_compute[238883]: 2026-02-02 12:12:49.811 238887 DEBUG oslo_concurrency.lockutils [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:12:49 np0005604943 nova_compute[238883]: 2026-02-02 12:12:49.983 238887 DEBUG oslo_concurrency.lockutils [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:12:49 np0005604943 nova_compute[238883]: 2026-02-02 12:12:49.984 238887 DEBUG oslo_concurrency.lockutils [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:12:49 np0005604943 nova_compute[238883]: 2026-02-02 12:12:49.984 238887 INFO nova.compute.manager [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Attaching volume 1b8accc9-9791-47dc-9164-e5beab913851 to /dev/vdb#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.100 238887 DEBUG os_brick.utils [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.104 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.114 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.115 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[e290ff66-1910-425c-bb8f-d757e5fd8ce4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.116 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.121 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.121 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[d08830f8-154e-44e4-9d04-76509b330126]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.123 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.132 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.133 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5f5b35-3807-4d62-af0e-646b9c6f71f4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.135 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[58f5b96f-d43d-4937-b722-16e60d372ea5]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.135 238887 DEBUG oslo_concurrency.processutils [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.165 238887 DEBUG oslo_concurrency.processutils [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.167 238887 DEBUG os_brick.initiator.connectors.lightos [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.168 238887 DEBUG os_brick.initiator.connectors.lightos [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.168 238887 DEBUG os_brick.initiator.connectors.lightos [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.168 238887 DEBUG os_brick.utils [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.168 238887 DEBUG nova.virt.block_device [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Updating existing volume attachment record: 55e752d9-f027-4513-9537-1d665d630dab _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.565 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 350 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Feb  2 07:12:50 np0005604943 nova_compute[238883]: 2026-02-02 12:12:50.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:12:51 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:12:51 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/982894751' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:12:51 np0005604943 nova_compute[238883]: 2026-02-02 12:12:51.132 238887 DEBUG nova.objects.instance [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:12:51 np0005604943 nova_compute[238883]: 2026-02-02 12:12:51.155 238887 DEBUG nova.virt.libvirt.driver [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Attempting to attach volume 1b8accc9-9791-47dc-9164-e5beab913851 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 07:12:51 np0005604943 nova_compute[238883]: 2026-02-02 12:12:51.157 238887 DEBUG nova.virt.libvirt.guest [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 07:12:51 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:12:51 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-1b8accc9-9791-47dc-9164-e5beab913851">
Feb  2 07:12:51 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:12:51 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:12:51 np0005604943 nova_compute[238883]:  <auth username="openstack">
Feb  2 07:12:51 np0005604943 nova_compute[238883]:    <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:12:51 np0005604943 nova_compute[238883]:  </auth>
Feb  2 07:12:51 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:12:51 np0005604943 nova_compute[238883]:  <serial>1b8accc9-9791-47dc-9164-e5beab913851</serial>
Feb  2 07:12:51 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:12:51 np0005604943 nova_compute[238883]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 07:12:51 np0005604943 nova_compute[238883]: 2026-02-02 12:12:51.258 238887 DEBUG nova.virt.libvirt.driver [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:12:51 np0005604943 nova_compute[238883]: 2026-02-02 12:12:51.259 238887 DEBUG nova.virt.libvirt.driver [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:12:51 np0005604943 nova_compute[238883]: 2026-02-02 12:12:51.259 238887 DEBUG nova.virt.libvirt.driver [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:12:51 np0005604943 nova_compute[238883]: 2026-02-02 12:12:51.259 238887 DEBUG nova.virt.libvirt.driver [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No VIF found with MAC fa:16:3e:c9:3a:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:12:51 np0005604943 nova_compute[238883]: 2026-02-02 12:12:51.445 238887 DEBUG oslo_concurrency.lockutils [None req-ae177edd-8a3d-48e3-bb7a-546ad7d7c9b9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.462s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:12:52 np0005604943 nova_compute[238883]: 2026-02-02 12:12:52.042 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 350 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 1.6 MiB/s wr, 59 op/s
Feb  2 07:12:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e494 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:12:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e494 do_prune osdmap full prune enabled
Feb  2 07:12:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e495 e495: 3 total, 3 up, 3 in
Feb  2 07:12:53 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e495: 3 total, 3 up, 3 in
Feb  2 07:12:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 350 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 21 KiB/s wr, 11 op/s
Feb  2 07:12:55 np0005604943 podman[272562]: 2026-02-02 12:12:55.035610915 +0000 UTC m=+0.054139931 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb  2 07:12:55 np0005604943 podman[272561]: 2026-02-02 12:12:55.106939637 +0000 UTC m=+0.126818070 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127)
Feb  2 07:12:55 np0005604943 nova_compute[238883]: 2026-02-02 12:12:55.567 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e495 do_prune osdmap full prune enabled
Feb  2 07:12:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e496 e496: 3 total, 3 up, 3 in
Feb  2 07:12:55 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e496: 3 total, 3 up, 3 in
Feb  2 07:12:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 350 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 24 KiB/s wr, 13 op/s
Feb  2 07:12:57 np0005604943 nova_compute[238883]: 2026-02-02 12:12:57.045 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:12:57 np0005604943 nova_compute[238883]: 2026-02-02 12:12:57.659 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:12:57 np0005604943 nova_compute[238883]: 2026-02-02 12:12:57.660 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  2 07:12:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e496 do_prune osdmap full prune enabled
Feb  2 07:12:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e497 e497: 3 total, 3 up, 3 in
Feb  2 07:12:57 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e497: 3 total, 3 up, 3 in
Feb  2 07:12:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e497 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:12:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 352 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 184 KiB/s wr, 108 op/s
Feb  2 07:12:58 np0005604943 nova_compute[238883]: 2026-02-02 12:12:58.990 238887 DEBUG oslo_concurrency.lockutils [None req-a1d0613c-4522-4944-a454-05252ad4adf0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:12:58 np0005604943 nova_compute[238883]: 2026-02-02 12:12:58.991 238887 DEBUG oslo_concurrency.lockutils [None req-a1d0613c-4522-4944-a454-05252ad4adf0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:12:59 np0005604943 nova_compute[238883]: 2026-02-02 12:12:59.012 238887 INFO nova.compute.manager [None req-a1d0613c-4522-4944-a454-05252ad4adf0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Detaching volume 1b8accc9-9791-47dc-9164-e5beab913851#033[00m
Feb  2 07:12:59 np0005604943 nova_compute[238883]: 2026-02-02 12:12:59.152 238887 INFO nova.virt.block_device [None req-a1d0613c-4522-4944-a454-05252ad4adf0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Attempting to driver detach volume 1b8accc9-9791-47dc-9164-e5beab913851 from mountpoint /dev/vdb#033[00m
Feb  2 07:12:59 np0005604943 nova_compute[238883]: 2026-02-02 12:12:59.164 238887 DEBUG nova.virt.libvirt.driver [None req-a1d0613c-4522-4944-a454-05252ad4adf0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Attempting to detach device vdb from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 07:12:59 np0005604943 nova_compute[238883]: 2026-02-02 12:12:59.165 238887 DEBUG nova.virt.libvirt.guest [None req-a1d0613c-4522-4944-a454-05252ad4adf0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:12:59 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:12:59 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-1b8accc9-9791-47dc-9164-e5beab913851">
Feb  2 07:12:59 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:12:59 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:12:59 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:12:59 np0005604943 nova_compute[238883]:  <serial>1b8accc9-9791-47dc-9164-e5beab913851</serial>
Feb  2 07:12:59 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:12:59 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:12:59 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:12:59 np0005604943 nova_compute[238883]: 2026-02-02 12:12:59.173 238887 INFO nova.virt.libvirt.driver [None req-a1d0613c-4522-4944-a454-05252ad4adf0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Successfully detached device vdb from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the persistent domain config.#033[00m
Feb  2 07:12:59 np0005604943 nova_compute[238883]: 2026-02-02 12:12:59.173 238887 DEBUG nova.virt.libvirt.driver [None req-a1d0613c-4522-4944-a454-05252ad4adf0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 07:12:59 np0005604943 nova_compute[238883]: 2026-02-02 12:12:59.174 238887 DEBUG nova.virt.libvirt.guest [None req-a1d0613c-4522-4944-a454-05252ad4adf0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:12:59 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:12:59 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-1b8accc9-9791-47dc-9164-e5beab913851">
Feb  2 07:12:59 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:12:59 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:12:59 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:12:59 np0005604943 nova_compute[238883]:  <serial>1b8accc9-9791-47dc-9164-e5beab913851</serial>
Feb  2 07:12:59 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:12:59 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:12:59 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:12:59 np0005604943 nova_compute[238883]: 2026-02-02 12:12:59.279 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Received event <DeviceRemovedEvent: 1770034379.2793782, 164c5391-dfe4-46ea-869a-95b649a1c3c7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 07:12:59 np0005604943 nova_compute[238883]: 2026-02-02 12:12:59.281 238887 DEBUG nova.virt.libvirt.driver [None req-a1d0613c-4522-4944-a454-05252ad4adf0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 07:12:59 np0005604943 nova_compute[238883]: 2026-02-02 12:12:59.285 238887 INFO nova.virt.libvirt.driver [None req-a1d0613c-4522-4944-a454-05252ad4adf0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Successfully detached device vdb from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the live domain config.#033[00m
Feb  2 07:12:59 np0005604943 nova_compute[238883]: 2026-02-02 12:12:59.471 238887 DEBUG nova.objects.instance [None req-a1d0613c-4522-4944-a454-05252ad4adf0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:12:59 np0005604943 nova_compute[238883]: 2026-02-02 12:12:59.531 238887 DEBUG oslo_concurrency.lockutils [None req-a1d0613c-4522-4944-a454-05252ad4adf0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:00 np0005604943 nova_compute[238883]: 2026-02-02 12:13:00.569 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 180 KiB/s wr, 173 op/s
Feb  2 07:13:00 np0005604943 ovn_controller[145056]: 2026-02-02T12:13:00Z|00288|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Feb  2 07:13:01 np0005604943 nova_compute[238883]: 2026-02-02 12:13:01.986 238887 DEBUG oslo_concurrency.lockutils [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:01 np0005604943 nova_compute[238883]: 2026-02-02 12:13:01.987 238887 DEBUG oslo_concurrency.lockutils [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.008 238887 DEBUG nova.objects.instance [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.047 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.055 238887 DEBUG oslo_concurrency.lockutils [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.296 238887 DEBUG oslo_concurrency.lockutils [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.296 238887 DEBUG oslo_concurrency.lockutils [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.297 238887 INFO nova.compute.manager [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Attaching volume c69d912c-751d-44c9-8318-5182741d70c4 to /dev/vdb#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.411 238887 DEBUG os_brick.utils [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.412 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.424 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.424 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[92c7fbb0-eb87-4ec3-af59-b6481b125735]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.426 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.434 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.434 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[65122701-8556-4acc-b502-16ff9eeda33f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.435 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.443 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.443 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ea7683-e45e-4870-b279-7a45e9ea923e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.445 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8ded9e-410b-4b28-b86b-61b94012ca88]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.446 238887 DEBUG oslo_concurrency.processutils [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.469 238887 DEBUG oslo_concurrency.processutils [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.472 238887 DEBUG os_brick.initiator.connectors.lightos [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.472 238887 DEBUG os_brick.initiator.connectors.lightos [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.473 238887 DEBUG os_brick.initiator.connectors.lightos [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.473 238887 DEBUG os_brick.utils [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] <== get_connector_properties: return (62ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:13:02 np0005604943 nova_compute[238883]: 2026-02-02 12:13:02.474 238887 DEBUG nova.virt.block_device [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Updating existing volume attachment record: 0f6b3b28-cd8b-4b41-b0c7-114585e45803 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:13:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 156 KiB/s wr, 155 op/s
Feb  2 07:13:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e497 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:13:03 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:13:03 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3600576778' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:13:03 np0005604943 nova_compute[238883]: 2026-02-02 12:13:03.390 238887 DEBUG nova.objects.instance [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:13:03 np0005604943 nova_compute[238883]: 2026-02-02 12:13:03.413 238887 DEBUG nova.virt.libvirt.driver [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Attempting to attach volume c69d912c-751d-44c9-8318-5182741d70c4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 07:13:03 np0005604943 nova_compute[238883]: 2026-02-02 12:13:03.416 238887 DEBUG nova.virt.libvirt.guest [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 07:13:03 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:13:03 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-c69d912c-751d-44c9-8318-5182741d70c4">
Feb  2 07:13:03 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:13:03 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:13:03 np0005604943 nova_compute[238883]:  <auth username="openstack">
Feb  2 07:13:03 np0005604943 nova_compute[238883]:    <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:13:03 np0005604943 nova_compute[238883]:  </auth>
Feb  2 07:13:03 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:13:03 np0005604943 nova_compute[238883]:  <serial>c69d912c-751d-44c9-8318-5182741d70c4</serial>
Feb  2 07:13:03 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:13:03 np0005604943 nova_compute[238883]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 07:13:03 np0005604943 nova_compute[238883]: 2026-02-02 12:13:03.519 238887 DEBUG nova.virt.libvirt.driver [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:13:03 np0005604943 nova_compute[238883]: 2026-02-02 12:13:03.520 238887 DEBUG nova.virt.libvirt.driver [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:13:03 np0005604943 nova_compute[238883]: 2026-02-02 12:13:03.520 238887 DEBUG nova.virt.libvirt.driver [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:13:03 np0005604943 nova_compute[238883]: 2026-02-02 12:13:03.521 238887 DEBUG nova.virt.libvirt.driver [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No VIF found with MAC fa:16:3e:c9:3a:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:13:03 np0005604943 nova_compute[238883]: 2026-02-02 12:13:03.691 238887 DEBUG oslo_concurrency.lockutils [None req-4becae49-b750-458c-85f7-67d3987c19b3 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 116 KiB/s rd, 144 KiB/s wr, 158 op/s
Feb  2 07:13:05 np0005604943 nova_compute[238883]: 2026-02-02 12:13:05.606 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.143 238887 DEBUG oslo_concurrency.lockutils [None req-5e1c11a4-ed7b-40b3-adcd-c036589ca6c1 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.143 238887 DEBUG oslo_concurrency.lockutils [None req-5e1c11a4-ed7b-40b3-adcd-c036589ca6c1 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.158 238887 INFO nova.compute.manager [None req-5e1c11a4-ed7b-40b3-adcd-c036589ca6c1 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Detaching volume c69d912c-751d-44c9-8318-5182741d70c4#033[00m
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.308 238887 INFO nova.virt.block_device [None req-5e1c11a4-ed7b-40b3-adcd-c036589ca6c1 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Attempting to driver detach volume c69d912c-751d-44c9-8318-5182741d70c4 from mountpoint /dev/vdb#033[00m
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.320 238887 DEBUG nova.virt.libvirt.driver [None req-5e1c11a4-ed7b-40b3-adcd-c036589ca6c1 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Attempting to detach device vdb from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.321 238887 DEBUG nova.virt.libvirt.guest [None req-5e1c11a4-ed7b-40b3-adcd-c036589ca6c1 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:13:06 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:13:06 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-c69d912c-751d-44c9-8318-5182741d70c4">
Feb  2 07:13:06 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:13:06 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:13:06 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:13:06 np0005604943 nova_compute[238883]:  <serial>c69d912c-751d-44c9-8318-5182741d70c4</serial>
Feb  2 07:13:06 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:13:06 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:13:06 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.329 238887 INFO nova.virt.libvirt.driver [None req-5e1c11a4-ed7b-40b3-adcd-c036589ca6c1 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Successfully detached device vdb from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the persistent domain config.#033[00m
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.330 238887 DEBUG nova.virt.libvirt.driver [None req-5e1c11a4-ed7b-40b3-adcd-c036589ca6c1 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.330 238887 DEBUG nova.virt.libvirt.guest [None req-5e1c11a4-ed7b-40b3-adcd-c036589ca6c1 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:13:06 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:13:06 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-c69d912c-751d-44c9-8318-5182741d70c4">
Feb  2 07:13:06 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:13:06 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:13:06 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:13:06 np0005604943 nova_compute[238883]:  <serial>c69d912c-751d-44c9-8318-5182741d70c4</serial>
Feb  2 07:13:06 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:13:06 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:13:06 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.441 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Received event <DeviceRemovedEvent: 1770034386.4407117, 164c5391-dfe4-46ea-869a-95b649a1c3c7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.443 238887 DEBUG nova.virt.libvirt.driver [None req-5e1c11a4-ed7b-40b3-adcd-c036589ca6c1 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.445 238887 INFO nova.virt.libvirt.driver [None req-5e1c11a4-ed7b-40b3-adcd-c036589ca6c1 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Successfully detached device vdb from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the live domain config.#033[00m
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.598 238887 DEBUG nova.objects.instance [None req-5e1c11a4-ed7b-40b3-adcd-c036589ca6c1 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:13:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 353 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 127 KiB/s wr, 140 op/s
Feb  2 07:13:06 np0005604943 nova_compute[238883]: 2026-02-02 12:13:06.635 238887 DEBUG oslo_concurrency.lockutils [None req-5e1c11a4-ed7b-40b3-adcd-c036589ca6c1 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.492s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:07 np0005604943 nova_compute[238883]: 2026-02-02 12:13:07.050 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e497 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:13:07 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:13:08 np0005604943 podman[272782]: 2026-02-02 12:13:08.265818945 +0000 UTC m=+0.049884316 container create ef3997dc668946bd1e1491e4faeb1685f7df9574c061bc482dd1dfee30712ae7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:13:08 np0005604943 systemd[1]: Started libpod-conmon-ef3997dc668946bd1e1491e4faeb1685f7df9574c061bc482dd1dfee30712ae7.scope.
Feb  2 07:13:08 np0005604943 podman[272782]: 2026-02-02 12:13:08.241434358 +0000 UTC m=+0.025499759 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:13:08 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:13:08 np0005604943 podman[272782]: 2026-02-02 12:13:08.369938551 +0000 UTC m=+0.154003952 container init ef3997dc668946bd1e1491e4faeb1685f7df9574c061bc482dd1dfee30712ae7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb  2 07:13:08 np0005604943 podman[272782]: 2026-02-02 12:13:08.376868218 +0000 UTC m=+0.160933619 container start ef3997dc668946bd1e1491e4faeb1685f7df9574c061bc482dd1dfee30712ae7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:13:08 np0005604943 infallible_shockley[272798]: 167 167
Feb  2 07:13:08 np0005604943 systemd[1]: libpod-ef3997dc668946bd1e1491e4faeb1685f7df9574c061bc482dd1dfee30712ae7.scope: Deactivated successfully.
Feb  2 07:13:08 np0005604943 conmon[272798]: conmon ef3997dc668946bd1e14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef3997dc668946bd1e1491e4faeb1685f7df9574c061bc482dd1dfee30712ae7.scope/container/memory.events
Feb  2 07:13:08 np0005604943 podman[272782]: 2026-02-02 12:13:08.385050529 +0000 UTC m=+0.169115960 container attach ef3997dc668946bd1e1491e4faeb1685f7df9574c061bc482dd1dfee30712ae7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb  2 07:13:08 np0005604943 podman[272782]: 2026-02-02 12:13:08.385725787 +0000 UTC m=+0.169791178 container died ef3997dc668946bd1e1491e4faeb1685f7df9574c061bc482dd1dfee30712ae7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 07:13:08 np0005604943 systemd[1]: var-lib-containers-storage-overlay-ed0e3dde64684e020148315839d920af2119499f3ce779ee99ac9b8d1025cee8-merged.mount: Deactivated successfully.
Feb  2 07:13:08 np0005604943 podman[272782]: 2026-02-02 12:13:08.459085725 +0000 UTC m=+0.243151106 container remove ef3997dc668946bd1e1491e4faeb1685f7df9574c061bc482dd1dfee30712ae7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:13:08 np0005604943 systemd[1]: libpod-conmon-ef3997dc668946bd1e1491e4faeb1685f7df9574c061bc482dd1dfee30712ae7.scope: Deactivated successfully.
Feb  2 07:13:08 np0005604943 podman[272821]: 2026-02-02 12:13:08.599044937 +0000 UTC m=+0.042482136 container create f79051b2d04b819e4134f3b1c3df0b61bc97edda5596f4381e07fd3c0f5cca3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_napier, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:13:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 353 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 73 KiB/s wr, 85 op/s
Feb  2 07:13:08 np0005604943 systemd[1]: Started libpod-conmon-f79051b2d04b819e4134f3b1c3df0b61bc97edda5596f4381e07fd3c0f5cca3a.scope.
Feb  2 07:13:08 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:13:08 np0005604943 podman[272821]: 2026-02-02 12:13:08.581799503 +0000 UTC m=+0.025236732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:13:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ea25fe198241b37d01afe8ff8b66d82d83fe2587bbbb41c12fa8cd23ae84e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:13:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ea25fe198241b37d01afe8ff8b66d82d83fe2587bbbb41c12fa8cd23ae84e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:13:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ea25fe198241b37d01afe8ff8b66d82d83fe2587bbbb41c12fa8cd23ae84e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:13:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ea25fe198241b37d01afe8ff8b66d82d83fe2587bbbb41c12fa8cd23ae84e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:13:08 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ea25fe198241b37d01afe8ff8b66d82d83fe2587bbbb41c12fa8cd23ae84e0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:13:08 np0005604943 podman[272821]: 2026-02-02 12:13:08.692608159 +0000 UTC m=+0.136045388 container init f79051b2d04b819e4134f3b1c3df0b61bc97edda5596f4381e07fd3c0f5cca3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb  2 07:13:08 np0005604943 podman[272821]: 2026-02-02 12:13:08.701741485 +0000 UTC m=+0.145178684 container start f79051b2d04b819e4134f3b1c3df0b61bc97edda5596f4381e07fd3c0f5cca3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:13:08 np0005604943 podman[272821]: 2026-02-02 12:13:08.705904137 +0000 UTC m=+0.149341436 container attach f79051b2d04b819e4134f3b1c3df0b61bc97edda5596f4381e07fd3c0f5cca3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_napier, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:13:09 np0005604943 happy_napier[272838]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:13:09 np0005604943 happy_napier[272838]: --> All data devices are unavailable
Feb  2 07:13:09 np0005604943 systemd[1]: libpod-f79051b2d04b819e4134f3b1c3df0b61bc97edda5596f4381e07fd3c0f5cca3a.scope: Deactivated successfully.
Feb  2 07:13:09 np0005604943 podman[272858]: 2026-02-02 12:13:09.169960836 +0000 UTC m=+0.025197260 container died f79051b2d04b819e4134f3b1c3df0b61bc97edda5596f4381e07fd3c0f5cca3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 07:13:09 np0005604943 systemd[1]: var-lib-containers-storage-overlay-03ea25fe198241b37d01afe8ff8b66d82d83fe2587bbbb41c12fa8cd23ae84e0-merged.mount: Deactivated successfully.
Feb  2 07:13:09 np0005604943 podman[272858]: 2026-02-02 12:13:09.210460818 +0000 UTC m=+0.065697222 container remove f79051b2d04b819e4134f3b1c3df0b61bc97edda5596f4381e07fd3c0f5cca3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_napier, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:13:09 np0005604943 systemd[1]: libpod-conmon-f79051b2d04b819e4134f3b1c3df0b61bc97edda5596f4381e07fd3c0f5cca3a.scope: Deactivated successfully.
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.358 238887 DEBUG oslo_concurrency.lockutils [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.360 238887 DEBUG oslo_concurrency.lockutils [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.376 238887 DEBUG nova.objects.instance [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.430 238887 DEBUG oslo_concurrency.lockutils [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.627 238887 DEBUG oslo_concurrency.lockutils [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.627 238887 DEBUG oslo_concurrency.lockutils [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.627 238887 INFO nova.compute.manager [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Attaching volume 7912aff7-a9a3-4b3c-9c43-4929c63d8144 to /dev/vdb#033[00m
Feb  2 07:13:09 np0005604943 podman[272935]: 2026-02-02 12:13:09.665795201 +0000 UTC m=+0.040827271 container create 054b67e572945ca399f060a2c42216a07fce112c841d72bdd60c47f561b4b5b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bose, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb  2 07:13:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:13:09
Feb  2 07:13:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:13:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:13:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['images', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root']
Feb  2 07:13:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:13:09 np0005604943 systemd[1]: Started libpod-conmon-054b67e572945ca399f060a2c42216a07fce112c841d72bdd60c47f561b4b5b0.scope.
Feb  2 07:13:09 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:13:09 np0005604943 podman[272935]: 2026-02-02 12:13:09.647624722 +0000 UTC m=+0.022656822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:13:09 np0005604943 podman[272935]: 2026-02-02 12:13:09.751516342 +0000 UTC m=+0.126548412 container init 054b67e572945ca399f060a2c42216a07fce112c841d72bdd60c47f561b4b5b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bose, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:13:09 np0005604943 podman[272935]: 2026-02-02 12:13:09.758859539 +0000 UTC m=+0.133891609 container start 054b67e572945ca399f060a2c42216a07fce112c841d72bdd60c47f561b4b5b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bose, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb  2 07:13:09 np0005604943 podman[272935]: 2026-02-02 12:13:09.763059243 +0000 UTC m=+0.138091333 container attach 054b67e572945ca399f060a2c42216a07fce112c841d72bdd60c47f561b4b5b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 07:13:09 np0005604943 sleepy_bose[272951]: 167 167
Feb  2 07:13:09 np0005604943 systemd[1]: libpod-054b67e572945ca399f060a2c42216a07fce112c841d72bdd60c47f561b4b5b0.scope: Deactivated successfully.
Feb  2 07:13:09 np0005604943 podman[272935]: 2026-02-02 12:13:09.76477703 +0000 UTC m=+0.139809130 container died 054b67e572945ca399f060a2c42216a07fce112c841d72bdd60c47f561b4b5b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bose, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.786 238887 DEBUG os_brick.utils [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:13:09 np0005604943 systemd[1]: var-lib-containers-storage-overlay-be56668de84695d252a5e34dfcbf69e4874237c4eb188e5b0531f112eac9f355-merged.mount: Deactivated successfully.
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.788 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.802 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.803 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1b4b5d-7104-44cc-b034-9f227f92169e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.805 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:09 np0005604943 podman[272935]: 2026-02-02 12:13:09.806147854 +0000 UTC m=+0.181179924 container remove 054b67e572945ca399f060a2c42216a07fce112c841d72bdd60c47f561b4b5b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:13:09 np0005604943 systemd[1]: libpod-conmon-054b67e572945ca399f060a2c42216a07fce112c841d72bdd60c47f561b4b5b0.scope: Deactivated successfully.
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.815 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.816 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[bbf2ad6f-328b-476a-9f79-2929a70f6da8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.818 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.827 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.828 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[d6fce76b-c127-46a8-babb-390fcb8e1074]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.830 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[736c1953-bc33-43a5-bc23-e15cb81f2495]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.831 238887 DEBUG oslo_concurrency.processutils [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.854 238887 DEBUG oslo_concurrency.processutils [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.858 238887 DEBUG os_brick.initiator.connectors.lightos [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.858 238887 DEBUG os_brick.initiator.connectors.lightos [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.859 238887 DEBUG os_brick.initiator.connectors.lightos [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.859 238887 DEBUG os_brick.utils [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:13:09 np0005604943 nova_compute[238883]: 2026-02-02 12:13:09.859 238887 DEBUG nova.virt.block_device [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Updating existing volume attachment record: 261dc786-48fb-4fb1-a528-9da0a94b1e49 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:13:09 np0005604943 podman[272981]: 2026-02-02 12:13:09.97590583 +0000 UTC m=+0.047704597 container create 25dd62aa46d54fd944977ca3ee80e0b91582446a1e4dd6de284d70c1e2a2f8d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:13:10 np0005604943 systemd[1]: Started libpod-conmon-25dd62aa46d54fd944977ca3ee80e0b91582446a1e4dd6de284d70c1e2a2f8d0.scope.
Feb  2 07:13:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:10.039 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:10.043 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:10.043 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:10 np0005604943 podman[272981]: 2026-02-02 12:13:09.95510492 +0000 UTC m=+0.026903707 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:13:10 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:13:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2493172b9a0c9bc511d7a96ee941016bd71729ebcd18d995fd8382c4ffeb62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:13:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2493172b9a0c9bc511d7a96ee941016bd71729ebcd18d995fd8382c4ffeb62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:13:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2493172b9a0c9bc511d7a96ee941016bd71729ebcd18d995fd8382c4ffeb62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:13:10 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2493172b9a0c9bc511d7a96ee941016bd71729ebcd18d995fd8382c4ffeb62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:13:10 np0005604943 podman[272981]: 2026-02-02 12:13:10.073956783 +0000 UTC m=+0.145755570 container init 25dd62aa46d54fd944977ca3ee80e0b91582446a1e4dd6de284d70c1e2a2f8d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 07:13:10 np0005604943 podman[272981]: 2026-02-02 12:13:10.079731839 +0000 UTC m=+0.151530596 container start 25dd62aa46d54fd944977ca3ee80e0b91582446a1e4dd6de284d70c1e2a2f8d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jemison, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 07:13:10 np0005604943 podman[272981]: 2026-02-02 12:13:10.083049328 +0000 UTC m=+0.154848115 container attach 25dd62aa46d54fd944977ca3ee80e0b91582446a1e4dd6de284d70c1e2a2f8d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]: {
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:    "0": [
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:        {
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "devices": [
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "/dev/loop3"
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            ],
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_name": "ceph_lv0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_size": "21470642176",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "name": "ceph_lv0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "tags": {
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.cluster_name": "ceph",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.crush_device_class": "",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.encrypted": "0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.objectstore": "bluestore",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.osd_id": "0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.type": "block",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.vdo": "0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.with_tpm": "0"
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            },
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "type": "block",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "vg_name": "ceph_vg0"
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:        }
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:    ],
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:    "1": [
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:        {
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "devices": [
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "/dev/loop4"
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            ],
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_name": "ceph_lv1",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_size": "21470642176",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "name": "ceph_lv1",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "tags": {
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.cluster_name": "ceph",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.crush_device_class": "",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.encrypted": "0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.objectstore": "bluestore",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.osd_id": "1",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.type": "block",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.vdo": "0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.with_tpm": "0"
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            },
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "type": "block",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "vg_name": "ceph_vg1"
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:        }
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:    ],
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:    "2": [
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:        {
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "devices": [
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "/dev/loop5"
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            ],
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_name": "ceph_lv2",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_size": "21470642176",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "name": "ceph_lv2",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "tags": {
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.cluster_name": "ceph",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.crush_device_class": "",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.encrypted": "0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.objectstore": "bluestore",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.osd_id": "2",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.type": "block",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.vdo": "0",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:                "ceph.with_tpm": "0"
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            },
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "type": "block",
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:            "vg_name": "ceph_vg2"
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:        }
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]:    ]
Feb  2 07:13:10 np0005604943 jovial_jemison[272997]: }
Feb  2 07:13:10 np0005604943 systemd[1]: libpod-25dd62aa46d54fd944977ca3ee80e0b91582446a1e4dd6de284d70c1e2a2f8d0.scope: Deactivated successfully.
Feb  2 07:13:10 np0005604943 podman[273006]: 2026-02-02 12:13:10.431084579 +0000 UTC m=+0.026732681 container died 25dd62aa46d54fd944977ca3ee80e0b91582446a1e4dd6de284d70c1e2a2f8d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb  2 07:13:10 np0005604943 systemd[1]: var-lib-containers-storage-overlay-3f2493172b9a0c9bc511d7a96ee941016bd71729ebcd18d995fd8382c4ffeb62-merged.mount: Deactivated successfully.
Feb  2 07:13:10 np0005604943 podman[273006]: 2026-02-02 12:13:10.467891352 +0000 UTC m=+0.063539434 container remove 25dd62aa46d54fd944977ca3ee80e0b91582446a1e4dd6de284d70c1e2a2f8d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 07:13:10 np0005604943 systemd[1]: libpod-conmon-25dd62aa46d54fd944977ca3ee80e0b91582446a1e4dd6de284d70c1e2a2f8d0.scope: Deactivated successfully.
Feb  2 07:13:10 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:13:10 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/719842687' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:13:10 np0005604943 nova_compute[238883]: 2026-02-02 12:13:10.608 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 353 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 76 KiB/s wr, 90 op/s
Feb  2 07:13:10 np0005604943 nova_compute[238883]: 2026-02-02 12:13:10.716 238887 DEBUG nova.objects.instance [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:13:10 np0005604943 nova_compute[238883]: 2026-02-02 12:13:10.739 238887 DEBUG nova.virt.libvirt.driver [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Attempting to attach volume 7912aff7-a9a3-4b3c-9c43-4929c63d8144 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 07:13:10 np0005604943 nova_compute[238883]: 2026-02-02 12:13:10.743 238887 DEBUG nova.virt.libvirt.guest [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 07:13:10 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:13:10 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-7912aff7-a9a3-4b3c-9c43-4929c63d8144">
Feb  2 07:13:10 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:13:10 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:13:10 np0005604943 nova_compute[238883]:  <auth username="openstack">
Feb  2 07:13:10 np0005604943 nova_compute[238883]:    <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:13:10 np0005604943 nova_compute[238883]:  </auth>
Feb  2 07:13:10 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:13:10 np0005604943 nova_compute[238883]:  <serial>7912aff7-a9a3-4b3c-9c43-4929c63d8144</serial>
Feb  2 07:13:10 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:13:10 np0005604943 nova_compute[238883]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 07:13:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:13:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:13:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:13:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:13:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:13:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:13:10 np0005604943 nova_compute[238883]: 2026-02-02 12:13:10.903 238887 DEBUG nova.virt.libvirt.driver [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:13:10 np0005604943 nova_compute[238883]: 2026-02-02 12:13:10.903 238887 DEBUG nova.virt.libvirt.driver [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:13:10 np0005604943 nova_compute[238883]: 2026-02-02 12:13:10.904 238887 DEBUG nova.virt.libvirt.driver [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:13:10 np0005604943 nova_compute[238883]: 2026-02-02 12:13:10.904 238887 DEBUG nova.virt.libvirt.driver [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No VIF found with MAC fa:16:3e:c9:3a:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:13:10 np0005604943 podman[273103]: 2026-02-02 12:13:10.946558384 +0000 UTC m=+0.051797477 container create 1e37611136450f263df028ff0476e92b8cf5343318037167c01ef3f10bb9f1e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:13:10 np0005604943 systemd[1]: Started libpod-conmon-1e37611136450f263df028ff0476e92b8cf5343318037167c01ef3f10bb9f1e7.scope.
Feb  2 07:13:11 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:13:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:13:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:13:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:13:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:13:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:13:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:13:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:13:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:13:11 np0005604943 podman[273103]: 2026-02-02 12:13:10.922148116 +0000 UTC m=+0.027387239 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:13:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:13:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:13:11 np0005604943 podman[273103]: 2026-02-02 12:13:11.050382903 +0000 UTC m=+0.155622016 container init 1e37611136450f263df028ff0476e92b8cf5343318037167c01ef3f10bb9f1e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb  2 07:13:11 np0005604943 podman[273103]: 2026-02-02 12:13:11.05879923 +0000 UTC m=+0.164038323 container start 1e37611136450f263df028ff0476e92b8cf5343318037167c01ef3f10bb9f1e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 07:13:11 np0005604943 sharp_zhukovsky[273119]: 167 167
Feb  2 07:13:11 np0005604943 systemd[1]: libpod-1e37611136450f263df028ff0476e92b8cf5343318037167c01ef3f10bb9f1e7.scope: Deactivated successfully.
Feb  2 07:13:11 np0005604943 nova_compute[238883]: 2026-02-02 12:13:11.071 238887 DEBUG oslo_concurrency.lockutils [None req-f1a185dd-096b-49c4-9983-7f8866a76eb9 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:11 np0005604943 podman[273103]: 2026-02-02 12:13:11.078412568 +0000 UTC m=+0.183651691 container attach 1e37611136450f263df028ff0476e92b8cf5343318037167c01ef3f10bb9f1e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 07:13:11 np0005604943 podman[273103]: 2026-02-02 12:13:11.08071576 +0000 UTC m=+0.185954853 container died 1e37611136450f263df028ff0476e92b8cf5343318037167c01ef3f10bb9f1e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb  2 07:13:11 np0005604943 systemd[1]: var-lib-containers-storage-overlay-fdcaee0e94042ca1826b6d9053da98a69525e6a00b3603d7cc10537ea4bd5404-merged.mount: Deactivated successfully.
Feb  2 07:13:11 np0005604943 podman[273103]: 2026-02-02 12:13:11.152499356 +0000 UTC m=+0.257738449 container remove 1e37611136450f263df028ff0476e92b8cf5343318037167c01ef3f10bb9f1e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb  2 07:13:11 np0005604943 systemd[1]: libpod-conmon-1e37611136450f263df028ff0476e92b8cf5343318037167c01ef3f10bb9f1e7.scope: Deactivated successfully.
Feb  2 07:13:11 np0005604943 podman[273145]: 2026-02-02 12:13:11.291766219 +0000 UTC m=+0.041331465 container create 1582d466c88c047ed970713a150521fda9c0d6d91ff2ba06a039f9554bdbb5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:13:11 np0005604943 systemd[1]: Started libpod-conmon-1582d466c88c047ed970713a150521fda9c0d6d91ff2ba06a039f9554bdbb5a5.scope.
Feb  2 07:13:11 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:13:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f446e7ebac81fe98f8b57b542187430dff38ae9444ccd1ef3ebc67705e26f1fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:13:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f446e7ebac81fe98f8b57b542187430dff38ae9444ccd1ef3ebc67705e26f1fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:13:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f446e7ebac81fe98f8b57b542187430dff38ae9444ccd1ef3ebc67705e26f1fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:13:11 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f446e7ebac81fe98f8b57b542187430dff38ae9444ccd1ef3ebc67705e26f1fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:13:11 np0005604943 podman[273145]: 2026-02-02 12:13:11.2721244 +0000 UTC m=+0.021689666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:13:11 np0005604943 podman[273145]: 2026-02-02 12:13:11.379380701 +0000 UTC m=+0.128945967 container init 1582d466c88c047ed970713a150521fda9c0d6d91ff2ba06a039f9554bdbb5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jang, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:13:11 np0005604943 podman[273145]: 2026-02-02 12:13:11.384116918 +0000 UTC m=+0.133682164 container start 1582d466c88c047ed970713a150521fda9c0d6d91ff2ba06a039f9554bdbb5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:13:11 np0005604943 podman[273145]: 2026-02-02 12:13:11.386979795 +0000 UTC m=+0.136545071 container attach 1582d466c88c047ed970713a150521fda9c0d6d91ff2ba06a039f9554bdbb5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jang, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 07:13:12 np0005604943 nova_compute[238883]: 2026-02-02 12:13:12.051 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:12 np0005604943 lvm[273241]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:13:12 np0005604943 lvm[273241]: VG ceph_vg1 finished
Feb  2 07:13:12 np0005604943 lvm[273240]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:13:12 np0005604943 lvm[273240]: VG ceph_vg0 finished
Feb  2 07:13:12 np0005604943 lvm[273243]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:13:12 np0005604943 lvm[273243]: VG ceph_vg2 finished
Feb  2 07:13:12 np0005604943 bold_jang[273162]: {}
Feb  2 07:13:12 np0005604943 systemd[1]: libpod-1582d466c88c047ed970713a150521fda9c0d6d91ff2ba06a039f9554bdbb5a5.scope: Deactivated successfully.
Feb  2 07:13:12 np0005604943 systemd[1]: libpod-1582d466c88c047ed970713a150521fda9c0d6d91ff2ba06a039f9554bdbb5a5.scope: Consumed 1.269s CPU time.
Feb  2 07:13:12 np0005604943 podman[273145]: 2026-02-02 12:13:12.256018611 +0000 UTC m=+1.005583857 container died 1582d466c88c047ed970713a150521fda9c0d6d91ff2ba06a039f9554bdbb5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:13:12 np0005604943 systemd[1]: var-lib-containers-storage-overlay-f446e7ebac81fe98f8b57b542187430dff38ae9444ccd1ef3ebc67705e26f1fd-merged.mount: Deactivated successfully.
Feb  2 07:13:12 np0005604943 podman[273145]: 2026-02-02 12:13:12.304877978 +0000 UTC m=+1.054443224 container remove 1582d466c88c047ed970713a150521fda9c0d6d91ff2ba06a039f9554bdbb5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jang, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:13:12 np0005604943 systemd[1]: libpod-conmon-1582d466c88c047ed970713a150521fda9c0d6d91ff2ba06a039f9554bdbb5a5.scope: Deactivated successfully.
Feb  2 07:13:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:13:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:13:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:13:12 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:13:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 354 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 140 KiB/s rd, 109 KiB/s wr, 47 op/s
Feb  2 07:13:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e497 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.347 238887 DEBUG oslo_concurrency.lockutils [None req-b0331e3d-7eff-46a1-88cc-f9c8217c2ab0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.348 238887 DEBUG oslo_concurrency.lockutils [None req-b0331e3d-7eff-46a1-88cc-f9c8217c2ab0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.361 238887 INFO nova.compute.manager [None req-b0331e3d-7eff-46a1-88cc-f9c8217c2ab0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Detaching volume 7912aff7-a9a3-4b3c-9c43-4929c63d8144#033[00m
Feb  2 07:13:13 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:13:13 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.531 238887 INFO nova.virt.block_device [None req-b0331e3d-7eff-46a1-88cc-f9c8217c2ab0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Attempting to driver detach volume 7912aff7-a9a3-4b3c-9c43-4929c63d8144 from mountpoint /dev/vdb#033[00m
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.543 238887 DEBUG nova.virt.libvirt.driver [None req-b0331e3d-7eff-46a1-88cc-f9c8217c2ab0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Attempting to detach device vdb from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.544 238887 DEBUG nova.virt.libvirt.guest [None req-b0331e3d-7eff-46a1-88cc-f9c8217c2ab0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:13:13 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:13:13 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-7912aff7-a9a3-4b3c-9c43-4929c63d8144">
Feb  2 07:13:13 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:13:13 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:13:13 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:13:13 np0005604943 nova_compute[238883]:  <serial>7912aff7-a9a3-4b3c-9c43-4929c63d8144</serial>
Feb  2 07:13:13 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:13:13 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:13:13 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.552 238887 INFO nova.virt.libvirt.driver [None req-b0331e3d-7eff-46a1-88cc-f9c8217c2ab0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Successfully detached device vdb from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the persistent domain config.#033[00m
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.553 238887 DEBUG nova.virt.libvirt.driver [None req-b0331e3d-7eff-46a1-88cc-f9c8217c2ab0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.553 238887 DEBUG nova.virt.libvirt.guest [None req-b0331e3d-7eff-46a1-88cc-f9c8217c2ab0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:13:13 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:13:13 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-7912aff7-a9a3-4b3c-9c43-4929c63d8144">
Feb  2 07:13:13 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:13:13 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:13:13 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:13:13 np0005604943 nova_compute[238883]:  <serial>7912aff7-a9a3-4b3c-9c43-4929c63d8144</serial>
Feb  2 07:13:13 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:13:13 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:13:13 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.669 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Received event <DeviceRemovedEvent: 1770034393.6683517, 164c5391-dfe4-46ea-869a-95b649a1c3c7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.674 238887 DEBUG nova.virt.libvirt.driver [None req-b0331e3d-7eff-46a1-88cc-f9c8217c2ab0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.677 238887 INFO nova.virt.libvirt.driver [None req-b0331e3d-7eff-46a1-88cc-f9c8217c2ab0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Successfully detached device vdb from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the live domain config.#033[00m
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.869 238887 DEBUG nova.objects.instance [None req-b0331e3d-7eff-46a1-88cc-f9c8217c2ab0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:13:13 np0005604943 nova_compute[238883]: 2026-02-02 12:13:13.911 238887 DEBUG oslo_concurrency.lockutils [None req-b0331e3d-7eff-46a1-88cc-f9c8217c2ab0 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 354 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 109 KiB/s wr, 46 op/s
Feb  2 07:13:15 np0005604943 nova_compute[238883]: 2026-02-02 12:13:15.612 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:16 np0005604943 nova_compute[238883]: 2026-02-02 12:13:16.429 238887 DEBUG oslo_concurrency.lockutils [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:16 np0005604943 nova_compute[238883]: 2026-02-02 12:13:16.429 238887 DEBUG oslo_concurrency.lockutils [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:16 np0005604943 nova_compute[238883]: 2026-02-02 12:13:16.457 238887 DEBUG nova.objects.instance [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:13:16 np0005604943 nova_compute[238883]: 2026-02-02 12:13:16.514 238887 DEBUG oslo_concurrency.lockutils [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 354 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 107 KiB/s wr, 33 op/s
Feb  2 07:13:16 np0005604943 nova_compute[238883]: 2026-02-02 12:13:16.803 238887 DEBUG oslo_concurrency.lockutils [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:16 np0005604943 nova_compute[238883]: 2026-02-02 12:13:16.804 238887 DEBUG oslo_concurrency.lockutils [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:16 np0005604943 nova_compute[238883]: 2026-02-02 12:13:16.804 238887 INFO nova.compute.manager [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Attaching volume fdb61ee2-4559-4178-81c4-6ed26df82214 to /dev/vdb#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.075 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.234 238887 DEBUG os_brick.utils [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.235 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.249 249642 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.249 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[b827a996-8347-4102-b83f-f90d95599ca2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.251 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.260 249642 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.260 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[a600b62b-5576-472f-9d2d-df6d7c84c088]: (4, ('InitiatorName=iqn.1994-05.com.redhat:0358d905acb', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.262 249642 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.270 249642 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.270 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[61b4c514-8212-492c-b301-4782bff61a2e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.272 249642 DEBUG oslo.privsep.daemon [-] privsep: reply[eb475cc6-6835-4cd2-96ac-bb5d23d72281]: (4, '4ccddb6b-e5c4-4cee-96ab-cfd456961526') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.273 238887 DEBUG oslo_concurrency.processutils [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.297 238887 DEBUG oslo_concurrency.processutils [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.299 238887 DEBUG os_brick.initiator.connectors.lightos [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.299 238887 DEBUG os_brick.initiator.connectors.lightos [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.300 238887 DEBUG os_brick.initiator.connectors.lightos [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.300 238887 DEBUG os_brick.utils [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:0358d905acb', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4ccddb6b-e5c4-4cee-96ab-cfd456961526', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Feb  2 07:13:17 np0005604943 nova_compute[238883]: 2026-02-02 12:13:17.300 238887 DEBUG nova.virt.block_device [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Updating existing volume attachment record: 6a18d9b9-675a-4ae7-a568-511309ba6ade _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Feb  2 07:13:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e497 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:13:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 07:13:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3193018949' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb  2 07:13:18 np0005604943 nova_compute[238883]: 2026-02-02 12:13:18.155 238887 DEBUG nova.objects.instance [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:13:18 np0005604943 nova_compute[238883]: 2026-02-02 12:13:18.185 238887 DEBUG nova.virt.libvirt.driver [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Attempting to attach volume fdb61ee2-4559-4178-81c4-6ed26df82214 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Feb  2 07:13:18 np0005604943 nova_compute[238883]: 2026-02-02 12:13:18.187 238887 DEBUG nova.virt.libvirt.guest [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] attach device xml: <disk type="network" device="disk">
Feb  2 07:13:18 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:13:18 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-fdb61ee2-4559-4178-81c4-6ed26df82214">
Feb  2 07:13:18 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:13:18 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:13:18 np0005604943 nova_compute[238883]:  <auth username="openstack">
Feb  2 07:13:18 np0005604943 nova_compute[238883]:    <secret type="ceph" uuid="4548a36b-7cdc-5e3e-a814-4e1571be1fae"/>
Feb  2 07:13:18 np0005604943 nova_compute[238883]:  </auth>
Feb  2 07:13:18 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:13:18 np0005604943 nova_compute[238883]:  <serial>fdb61ee2-4559-4178-81c4-6ed26df82214</serial>
Feb  2 07:13:18 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:13:18 np0005604943 nova_compute[238883]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 07:13:18 np0005604943 nova_compute[238883]: 2026-02-02 12:13:18.300 238887 DEBUG nova.virt.libvirt.driver [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:13:18 np0005604943 nova_compute[238883]: 2026-02-02 12:13:18.302 238887 DEBUG nova.virt.libvirt.driver [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:13:18 np0005604943 nova_compute[238883]: 2026-02-02 12:13:18.302 238887 DEBUG nova.virt.libvirt.driver [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 07:13:18 np0005604943 nova_compute[238883]: 2026-02-02 12:13:18.302 238887 DEBUG nova.virt.libvirt.driver [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] No VIF found with MAC fa:16:3e:c9:3a:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 07:13:18 np0005604943 nova_compute[238883]: 2026-02-02 12:13:18.480 238887 DEBUG oslo_concurrency.lockutils [None req-bf934d8c-907a-4109-943b-6e61758ae4d7 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 354 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 148 KiB/s rd, 117 KiB/s wr, 42 op/s
Feb  2 07:13:20 np0005604943 nova_compute[238883]: 2026-02-02 12:13:20.614 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 354 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 72 KiB/s wr, 49 op/s
Feb  2 07:13:20 np0005604943 nova_compute[238883]: 2026-02-02 12:13:20.929 238887 DEBUG oslo_concurrency.lockutils [None req-de4cc5ac-e673-43d2-9301-cebe2b2885cc 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:20 np0005604943 nova_compute[238883]: 2026-02-02 12:13:20.930 238887 DEBUG oslo_concurrency.lockutils [None req-de4cc5ac-e673-43d2-9301-cebe2b2885cc 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:20 np0005604943 nova_compute[238883]: 2026-02-02 12:13:20.958 238887 INFO nova.compute.manager [None req-de4cc5ac-e673-43d2-9301-cebe2b2885cc 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Detaching volume fdb61ee2-4559-4178-81c4-6ed26df82214#033[00m
Feb  2 07:13:21 np0005604943 nova_compute[238883]: 2026-02-02 12:13:21.105 238887 INFO nova.virt.block_device [None req-de4cc5ac-e673-43d2-9301-cebe2b2885cc 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Attempting to driver detach volume fdb61ee2-4559-4178-81c4-6ed26df82214 from mountpoint /dev/vdb#033[00m
Feb  2 07:13:21 np0005604943 nova_compute[238883]: 2026-02-02 12:13:21.113 238887 DEBUG nova.virt.libvirt.driver [None req-de4cc5ac-e673-43d2-9301-cebe2b2885cc 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Attempting to detach device vdb from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 07:13:21 np0005604943 nova_compute[238883]: 2026-02-02 12:13:21.114 238887 DEBUG nova.virt.libvirt.guest [None req-de4cc5ac-e673-43d2-9301-cebe2b2885cc 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:13:21 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:13:21 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-fdb61ee2-4559-4178-81c4-6ed26df82214">
Feb  2 07:13:21 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:13:21 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:13:21 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:13:21 np0005604943 nova_compute[238883]:  <serial>fdb61ee2-4559-4178-81c4-6ed26df82214</serial>
Feb  2 07:13:21 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:13:21 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:13:21 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:13:21 np0005604943 nova_compute[238883]: 2026-02-02 12:13:21.124 238887 INFO nova.virt.libvirt.driver [None req-de4cc5ac-e673-43d2-9301-cebe2b2885cc 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Successfully detached device vdb from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the persistent domain config.#033[00m
Feb  2 07:13:21 np0005604943 nova_compute[238883]: 2026-02-02 12:13:21.124 238887 DEBUG nova.virt.libvirt.driver [None req-de4cc5ac-e673-43d2-9301-cebe2b2885cc 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 07:13:21 np0005604943 nova_compute[238883]: 2026-02-02 12:13:21.125 238887 DEBUG nova.virt.libvirt.guest [None req-de4cc5ac-e673-43d2-9301-cebe2b2885cc 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] detach device xml: <disk type="network" device="disk">
Feb  2 07:13:21 np0005604943 nova_compute[238883]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Feb  2 07:13:21 np0005604943 nova_compute[238883]:  <source protocol="rbd" name="volumes/volume-fdb61ee2-4559-4178-81c4-6ed26df82214">
Feb  2 07:13:21 np0005604943 nova_compute[238883]:    <host name="192.168.122.100" port="6789"/>
Feb  2 07:13:21 np0005604943 nova_compute[238883]:  </source>
Feb  2 07:13:21 np0005604943 nova_compute[238883]:  <target dev="vdb" bus="virtio"/>
Feb  2 07:13:21 np0005604943 nova_compute[238883]:  <serial>fdb61ee2-4559-4178-81c4-6ed26df82214</serial>
Feb  2 07:13:21 np0005604943 nova_compute[238883]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Feb  2 07:13:21 np0005604943 nova_compute[238883]: </disk>
Feb  2 07:13:21 np0005604943 nova_compute[238883]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 07:13:21 np0005604943 nova_compute[238883]: 2026-02-02 12:13:21.233 238887 DEBUG nova.virt.libvirt.driver [None req-25c0e983-2b2a-41e2-81a5-1db40129a06f - - - - - -] Received event <DeviceRemovedEvent: 1770034401.2330947, 164c5391-dfe4-46ea-869a-95b649a1c3c7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 07:13:21 np0005604943 nova_compute[238883]: 2026-02-02 12:13:21.234 238887 DEBUG nova.virt.libvirt.driver [None req-de4cc5ac-e673-43d2-9301-cebe2b2885cc 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb  2 07:13:21 np0005604943 nova_compute[238883]: 2026-02-02 12:13:21.237 238887 INFO nova.virt.libvirt.driver [None req-de4cc5ac-e673-43d2-9301-cebe2b2885cc 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Successfully detached device vdb from instance 164c5391-dfe4-46ea-869a-95b649a1c3c7 from the live domain config.#033[00m
Feb  2 07:13:21 np0005604943 nova_compute[238883]: 2026-02-02 12:13:21.386 238887 DEBUG nova.objects.instance [None req-de4cc5ac-e673-43d2-9301-cebe2b2885cc 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:13:21 np0005604943 nova_compute[238883]: 2026-02-02 12:13:21.492 238887 DEBUG oslo_concurrency.lockutils [None req-de4cc5ac-e673-43d2-9301-cebe2b2885cc 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007646563090857036 of space, bias 1.0, pg target 0.22939689272571107 quantized to 32 (current 32)
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029442147630464365 of space, bias 1.0, pg target 0.883264428913931 quantized to 32 (current 32)
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.5787448921558827e-06 of space, bias 1.0, pg target 0.0004736234676467648 quantized to 32 (current 32)
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670843012484906 of space, bias 1.0, pg target 0.20012529037454718 quantized to 32 (current 32)
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.079706876486115e-06 of space, bias 4.0, pg target 0.0012956482517833378 quantized to 16 (current 16)
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:13:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 07:13:22 np0005604943 nova_compute[238883]: 2026-02-02 12:13:22.114 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 355 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 144 KiB/s rd, 118 KiB/s wr, 47 op/s
Feb  2 07:13:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:13:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3450409402' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:13:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:13:22 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3450409402' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:13:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e497 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:13:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:13:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2045564675' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:13:24 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:13:24 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2045564675' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:13:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 355 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 71 KiB/s wr, 43 op/s
Feb  2 07:13:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:13:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4257814712' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:13:25 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:13:25 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4257814712' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:13:25 np0005604943 nova_compute[238883]: 2026-02-02 12:13:25.617 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:26 np0005604943 podman[273314]: 2026-02-02 12:13:26.055663069 +0000 UTC m=+0.062475415 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Feb  2 07:13:26 np0005604943 podman[273313]: 2026-02-02 12:13:26.087303661 +0000 UTC m=+0.094337863 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller)
Feb  2 07:13:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 355 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 70 KiB/s wr, 39 op/s
Feb  2 07:13:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e497 do_prune osdmap full prune enabled
Feb  2 07:13:26 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e498 e498: 3 total, 3 up, 3 in
Feb  2 07:13:26 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e498: 3 total, 3 up, 3 in
Feb  2 07:13:27 np0005604943 nova_compute[238883]: 2026-02-02 12:13:27.115 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e498 do_prune osdmap full prune enabled
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e499 e499: 3 total, 3 up, 3 in
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e499: 3 total, 3 up, 3 in
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e499 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:27.941275) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034407941337, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1155, "num_deletes": 251, "total_data_size": 1625652, "memory_usage": 1647704, "flush_reason": "Manual Compaction"}
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034407951857, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1025490, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34277, "largest_seqno": 35431, "table_properties": {"data_size": 1020962, "index_size": 1988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11861, "raw_average_key_size": 20, "raw_value_size": 1011184, "raw_average_value_size": 1786, "num_data_blocks": 89, "num_entries": 566, "num_filter_entries": 566, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770034310, "oldest_key_time": 1770034310, "file_creation_time": 1770034407, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 10691 microseconds, and 4291 cpu microseconds.
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:27.951912) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1025490 bytes OK
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:27.951991) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:27.953396) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:27.953418) EVENT_LOG_v1 {"time_micros": 1770034407953410, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:27.953442) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1620316, prev total WAL file size 1620316, number of live WAL files 2.
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:27.954181) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1001KB)], [68(11MB)]
Feb  2 07:13:27 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034407954304, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 13207538, "oldest_snapshot_seqno": -1}
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6897 keys, 10436544 bytes, temperature: kUnknown
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034408033373, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 10436544, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10385274, "index_size": 32918, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 172648, "raw_average_key_size": 25, "raw_value_size": 10256412, "raw_average_value_size": 1487, "num_data_blocks": 1317, "num_entries": 6897, "num_filter_entries": 6897, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770034407, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:28.033745) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 10436544 bytes
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:28.035589) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.8 rd, 131.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.6 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(23.1) write-amplify(10.2) OK, records in: 7372, records dropped: 475 output_compression: NoCompression
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:28.035613) EVENT_LOG_v1 {"time_micros": 1770034408035602, "job": 38, "event": "compaction_finished", "compaction_time_micros": 79175, "compaction_time_cpu_micros": 34910, "output_level": 6, "num_output_files": 1, "total_output_size": 10436544, "num_input_records": 7372, "num_output_records": 6897, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034408035946, "job": 38, "event": "table_file_deletion", "file_number": 70}
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034408037743, "job": 38, "event": "table_file_deletion", "file_number": 68}
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:27.953984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:28.037853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:28.037860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:28.037862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:28.037864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:13:28 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:13:28.037866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:13:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 353 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 88 KiB/s wr, 82 op/s
Feb  2 07:13:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e499 do_prune osdmap full prune enabled
Feb  2 07:13:29 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e500 e500: 3 total, 3 up, 3 in
Feb  2 07:13:29 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e500: 3 total, 3 up, 3 in
Feb  2 07:13:30 np0005604943 nova_compute[238883]: 2026-02-02 12:13:30.619 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 353 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 5.8 KiB/s wr, 147 op/s
Feb  2 07:13:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:13:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2612716247' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:13:31 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:13:31 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2612716247' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.117 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.577 238887 DEBUG oslo_concurrency.lockutils [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.578 238887 DEBUG oslo_concurrency.lockutils [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.578 238887 DEBUG oslo_concurrency.lockutils [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.578 238887 DEBUG oslo_concurrency.lockutils [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.578 238887 DEBUG oslo_concurrency.lockutils [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.579 238887 INFO nova.compute.manager [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Terminating instance#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.580 238887 DEBUG nova.compute.manager [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 07:13:32 np0005604943 kernel: tap900e2f84-b3 (unregistering): left promiscuous mode
Feb  2 07:13:32 np0005604943 NetworkManager[49093]: <info>  [1770034412.6358] device (tap900e2f84-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 07:13:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 351 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 5.8 KiB/s wr, 152 op/s
Feb  2 07:13:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:13:32Z|00289|binding|INFO|Releasing lport 900e2f84-b3d4-4547-bc57-6f2929841348 from this chassis (sb_readonly=0)
Feb  2 07:13:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:13:32Z|00290|binding|INFO|Setting lport 900e2f84-b3d4-4547-bc57-6f2929841348 down in Southbound
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.645 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:32 np0005604943 ovn_controller[145056]: 2026-02-02T12:13:32Z|00291|binding|INFO|Removing iface tap900e2f84-b3 ovn-installed in OVS
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.653 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:3a:08 10.100.0.10'], port_security=['fa:16:3e:c9:3a:08 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '164c5391-dfe4-46ea-869a-95b649a1c3c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c59f5e49-0a3a-410a-8325-47d3dec9f7b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '958cf437f65d4a81920df75a49529bf6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '97827449-3627-48d4-98b2-464ce4a68259 fd8be42c-59c5-490e-924a-1e23858192d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f147b1c-ca00-4250-8ab8-ad8ad4e3ed98, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>], logical_port=900e2f84-b3d4-4547-bc57-6f2929841348) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe337c0fcd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.654 155011 INFO neutron.agent.ovn.metadata.agent [-] Port 900e2f84-b3d4-4547-bc57-6f2929841348 in datapath c59f5e49-0a3a-410a-8325-47d3dec9f7b5 unbound from our chassis#033[00m
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.655 155011 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c59f5e49-0a3a-410a-8325-47d3dec9f7b5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.656 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.656 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[50affcd2-127c-49f1-b61e-e85ca8e465ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.657 155011 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5 namespace which is not needed anymore#033[00m
Feb  2 07:13:32 np0005604943 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Feb  2 07:13:32 np0005604943 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Consumed 15.976s CPU time.
Feb  2 07:13:32 np0005604943 systemd-machined[206973]: Machine qemu-29-instance-0000001d terminated.
Feb  2 07:13:32 np0005604943 neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5[272472]: [NOTICE]   (272476) : haproxy version is 2.8.14-c23fe91
Feb  2 07:13:32 np0005604943 neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5[272472]: [NOTICE]   (272476) : path to executable is /usr/sbin/haproxy
Feb  2 07:13:32 np0005604943 neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5[272472]: [WARNING]  (272476) : Exiting Master process...
Feb  2 07:13:32 np0005604943 neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5[272472]: [ALERT]    (272476) : Current worker (272478) exited with code 143 (Terminated)
Feb  2 07:13:32 np0005604943 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 07:13:32 np0005604943 neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5[272472]: [WARNING]  (272476) : All workers exited. Exiting... (0)
Feb  2 07:13:32 np0005604943 systemd[1]: libpod-02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093.scope: Deactivated successfully.
Feb  2 07:13:32 np0005604943 podman[273380]: 2026-02-02 12:13:32.78854001 +0000 UTC m=+0.049799623 container died 02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.814 238887 INFO nova.virt.libvirt.driver [-] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Instance destroyed successfully.#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.815 238887 DEBUG nova.objects.instance [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lazy-loading 'resources' on Instance uuid 164c5391-dfe4-46ea-869a-95b649a1c3c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 07:13:32 np0005604943 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093-userdata-shm.mount: Deactivated successfully.
Feb  2 07:13:32 np0005604943 systemd[1]: var-lib-containers-storage-overlay-7856b69b91fa469ba5c25d85e11f23dc7525eb9242a952122517d0844d3cd868-merged.mount: Deactivated successfully.
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.830 238887 DEBUG nova.virt.libvirt.vif [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T12:12:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1129494028',display_name='tempest-SnapshotDataIntegrityTests-server-1129494028',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1129494028',id=29,image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMd27Fbt/2AjwZ7BSXCyqEHmD9EBntm5Erk9UvnC43jyZQiT2isCRvatHrpXLUoPsAJ21gvrK0s2X2qgOwwNSe8NmgEEdwZ83TjSqbD5u0ryoyNIVVBNPA4TMvQ0gqSCyA==',key_name='tempest-keypair-248723971',keypairs=<?>,launch_index=0,launched_at=2026-02-02T12:12:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='958cf437f65d4a81920df75a49529bf6',ramdisk_id='',reservation_id='r-0xu113v9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='21b263f0-00f1-47be-b8b1-e3c07da0a6a2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SnapshotDataIntegrityTests-1537204195',owner_user_name='tempest-SnapshotDataIntegrityTests-1537204195-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T12:12:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='070af1bcc4704072a10de7fa6d563de8',uuid=164c5391-dfe4-46ea-869a-95b649a1c3c7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "900e2f84-b3d4-4547-bc57-6f2929841348", "address": "fa:16:3e:c9:3a:08", "network": {"id": "c59f5e49-0a3a-410a-8325-47d3dec9f7b5", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1181677005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "958cf437f65d4a81920df75a49529bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap900e2f84-b3", "ovs_interfaceid": "900e2f84-b3d4-4547-bc57-6f2929841348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.831 238887 DEBUG nova.network.os_vif_util [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Converting VIF {"id": "900e2f84-b3d4-4547-bc57-6f2929841348", "address": "fa:16:3e:c9:3a:08", "network": {"id": "c59f5e49-0a3a-410a-8325-47d3dec9f7b5", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1181677005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "958cf437f65d4a81920df75a49529bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap900e2f84-b3", "ovs_interfaceid": "900e2f84-b3d4-4547-bc57-6f2929841348", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.832 238887 DEBUG nova.network.os_vif_util [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:3a:08,bridge_name='br-int',has_traffic_filtering=True,id=900e2f84-b3d4-4547-bc57-6f2929841348,network=Network(c59f5e49-0a3a-410a-8325-47d3dec9f7b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap900e2f84-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.832 238887 DEBUG os_vif [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:3a:08,bridge_name='br-int',has_traffic_filtering=True,id=900e2f84-b3d4-4547-bc57-6f2929841348,network=Network(c59f5e49-0a3a-410a-8325-47d3dec9f7b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap900e2f84-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 07:13:32 np0005604943 podman[273380]: 2026-02-02 12:13:32.835816455 +0000 UTC m=+0.097076058 container cleanup 02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.836 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.836 238887 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap900e2f84-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.839 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.840 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.843 238887 INFO os_vif [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:3a:08,bridge_name='br-int',has_traffic_filtering=True,id=900e2f84-b3d4-4547-bc57-6f2929841348,network=Network(c59f5e49-0a3a-410a-8325-47d3dec9f7b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap900e2f84-b3')#033[00m
Feb  2 07:13:32 np0005604943 systemd[1]: libpod-conmon-02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093.scope: Deactivated successfully.
Feb  2 07:13:32 np0005604943 podman[273422]: 2026-02-02 12:13:32.902597725 +0000 UTC m=+0.044573953 container remove 02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.909 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[3c276c54-5c36-40ea-8ae0-02932f5be1bb]: (4, ('Mon Feb  2 12:13:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5 (02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093)\n02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093\nMon Feb  2 12:13:32 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5 (02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093)\n02a6c1f23f4427298a1f170a49b434ef7ecd02774ca000a52f312d4bc0bcd093\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.912 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[effc8c85-e7d8-439f-bfed-dcc57ff2f54d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.914 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc59f5e49-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.916 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:32 np0005604943 kernel: tapc59f5e49-00: left promiscuous mode
Feb  2 07:13:32 np0005604943 nova_compute[238883]: 2026-02-02 12:13:32.921 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.924 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[bf6e7b50-aff9-4329-ab2e-7d93c568f1ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e500 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:13:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e500 do_prune osdmap full prune enabled
Feb  2 07:13:32 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e501 e501: 3 total, 3 up, 3 in
Feb  2 07:13:32 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e501: 3 total, 3 up, 3 in
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.944 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[befc84fe-abd3-4231-8082-5f7ae05d6c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.946 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc7df37-2787-4576-af10-ee1f9b436808]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.962 245329 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5be393-485e-4d17-af53-5746d3f13b6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479177, 'reachable_time': 33899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273456, 'error': None, 'target': 'ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:32 np0005604943 systemd[1]: run-netns-ovnmeta\x2dc59f5e49\x2d0a3a\x2d410a\x2d8325\x2d47d3dec9f7b5.mount: Deactivated successfully.
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.966 155575 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c59f5e49-0a3a-410a-8325-47d3dec9f7b5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 07:13:32 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:32.966 155575 DEBUG oslo.privsep.daemon [-] privsep: reply[d2bc7481-b56e-4707-8ceb-5ebd34e4906e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.114 238887 INFO nova.virt.libvirt.driver [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Deleting instance files /var/lib/nova/instances/164c5391-dfe4-46ea-869a-95b649a1c3c7_del#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.115 238887 INFO nova.virt.libvirt.driver [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Deletion of /var/lib/nova/instances/164c5391-dfe4-46ea-869a-95b649a1c3c7_del complete#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.180 238887 INFO nova.compute.manager [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.180 238887 DEBUG oslo.service.loopingcall [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.181 238887 DEBUG nova.compute.manager [-] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.181 238887 DEBUG nova.network.neutron [-] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.425 238887 DEBUG nova.compute.manager [req-c438ec8b-7736-4246-bdbe-3f2ec2dcfd4c req-f7c1bd23-6f1a-40bf-b740-aaebed031d38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Received event network-vif-unplugged-900e2f84-b3d4-4547-bc57-6f2929841348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.425 238887 DEBUG oslo_concurrency.lockutils [req-c438ec8b-7736-4246-bdbe-3f2ec2dcfd4c req-f7c1bd23-6f1a-40bf-b740-aaebed031d38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.426 238887 DEBUG oslo_concurrency.lockutils [req-c438ec8b-7736-4246-bdbe-3f2ec2dcfd4c req-f7c1bd23-6f1a-40bf-b740-aaebed031d38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.426 238887 DEBUG oslo_concurrency.lockutils [req-c438ec8b-7736-4246-bdbe-3f2ec2dcfd4c req-f7c1bd23-6f1a-40bf-b740-aaebed031d38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.426 238887 DEBUG nova.compute.manager [req-c438ec8b-7736-4246-bdbe-3f2ec2dcfd4c req-f7c1bd23-6f1a-40bf-b740-aaebed031d38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] No waiting events found dispatching network-vif-unplugged-900e2f84-b3d4-4547-bc57-6f2929841348 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.426 238887 DEBUG nova.compute.manager [req-c438ec8b-7736-4246-bdbe-3f2ec2dcfd4c req-f7c1bd23-6f1a-40bf-b740-aaebed031d38 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Received event network-vif-unplugged-900e2f84-b3d4-4547-bc57-6f2929841348 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 07:13:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:33.639 155011 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:72:bc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e6:6c:c3:d0:0a:db'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 07:13:33 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:33.640 155011 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.639 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.973 238887 DEBUG nova.network.neutron [-] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 07:13:33 np0005604943 nova_compute[238883]: 2026-02-02 12:13:33.993 238887 INFO nova.compute.manager [-] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Took 0.81 seconds to deallocate network for instance.#033[00m
Feb  2 07:13:34 np0005604943 nova_compute[238883]: 2026-02-02 12:13:34.035 238887 DEBUG oslo_concurrency.lockutils [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:34 np0005604943 nova_compute[238883]: 2026-02-02 12:13:34.036 238887 DEBUG oslo_concurrency.lockutils [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:34 np0005604943 nova_compute[238883]: 2026-02-02 12:13:34.094 238887 DEBUG oslo_concurrency.processutils [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 326 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 6.6 KiB/s wr, 115 op/s
Feb  2 07:13:34 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:13:34 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/991259210' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:13:34 np0005604943 nova_compute[238883]: 2026-02-02 12:13:34.732 238887 DEBUG oslo_concurrency.processutils [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:34 np0005604943 nova_compute[238883]: 2026-02-02 12:13:34.739 238887 DEBUG nova.compute.provider_tree [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:13:34 np0005604943 nova_compute[238883]: 2026-02-02 12:13:34.760 238887 DEBUG nova.scheduler.client.report [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:13:34 np0005604943 nova_compute[238883]: 2026-02-02 12:13:34.782 238887 DEBUG oslo_concurrency.lockutils [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:34 np0005604943 nova_compute[238883]: 2026-02-02 12:13:34.805 238887 INFO nova.scheduler.client.report [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Deleted allocations for instance 164c5391-dfe4-46ea-869a-95b649a1c3c7#033[00m
Feb  2 07:13:34 np0005604943 nova_compute[238883]: 2026-02-02 12:13:34.869 238887 DEBUG oslo_concurrency.lockutils [None req-d5691e36-5b02-4c8b-9a0a-da812efe6597 070af1bcc4704072a10de7fa6d563de8 958cf437f65d4a81920df75a49529bf6 - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:35 np0005604943 nova_compute[238883]: 2026-02-02 12:13:35.500 238887 DEBUG nova.compute.manager [req-57deac7e-98ae-4ca9-a1a0-c29134deb705 req-adcb746c-78c5-409b-9dcc-dac102f4647e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Received event network-vif-plugged-900e2f84-b3d4-4547-bc57-6f2929841348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:13:35 np0005604943 nova_compute[238883]: 2026-02-02 12:13:35.501 238887 DEBUG oslo_concurrency.lockutils [req-57deac7e-98ae-4ca9-a1a0-c29134deb705 req-adcb746c-78c5-409b-9dcc-dac102f4647e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Acquiring lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:35 np0005604943 nova_compute[238883]: 2026-02-02 12:13:35.501 238887 DEBUG oslo_concurrency.lockutils [req-57deac7e-98ae-4ca9-a1a0-c29134deb705 req-adcb746c-78c5-409b-9dcc-dac102f4647e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:35 np0005604943 nova_compute[238883]: 2026-02-02 12:13:35.501 238887 DEBUG oslo_concurrency.lockutils [req-57deac7e-98ae-4ca9-a1a0-c29134deb705 req-adcb746c-78c5-409b-9dcc-dac102f4647e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] Lock "164c5391-dfe4-46ea-869a-95b649a1c3c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:35 np0005604943 nova_compute[238883]: 2026-02-02 12:13:35.501 238887 DEBUG nova.compute.manager [req-57deac7e-98ae-4ca9-a1a0-c29134deb705 req-adcb746c-78c5-409b-9dcc-dac102f4647e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] No waiting events found dispatching network-vif-plugged-900e2f84-b3d4-4547-bc57-6f2929841348 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 07:13:35 np0005604943 nova_compute[238883]: 2026-02-02 12:13:35.502 238887 WARNING nova.compute.manager [req-57deac7e-98ae-4ca9-a1a0-c29134deb705 req-adcb746c-78c5-409b-9dcc-dac102f4647e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Received unexpected event network-vif-plugged-900e2f84-b3d4-4547-bc57-6f2929841348 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 07:13:35 np0005604943 nova_compute[238883]: 2026-02-02 12:13:35.502 238887 DEBUG nova.compute.manager [req-57deac7e-98ae-4ca9-a1a0-c29134deb705 req-adcb746c-78c5-409b-9dcc-dac102f4647e 90fda3c1571c4bb6a0200c98c8f8822a 5d1391dd753a4b3d8816407391cfb72d - - default default] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Received event network-vif-deleted-900e2f84-b3d4-4547-bc57-6f2929841348 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 07:13:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 326 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 5.6 KiB/s wr, 98 op/s
Feb  2 07:13:37 np0005604943 nova_compute[238883]: 2026-02-02 12:13:37.120 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:37 np0005604943 nova_compute[238883]: 2026-02-02 12:13:37.840 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e501 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:13:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e501 do_prune osdmap full prune enabled
Feb  2 07:13:37 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 e502: 3 total, 3 up, 3 in
Feb  2 07:13:37 np0005604943 ceph-mon[75271]: log_channel(cluster) log [DBG] : osdmap e502: 3 total, 3 up, 3 in
Feb  2 07:13:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 271 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 3.5 KiB/s wr, 78 op/s
Feb  2 07:13:39 np0005604943 nova_compute[238883]: 2026-02-02 12:13:39.318 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:39 np0005604943 nova_compute[238883]: 2026-02-02 12:13:39.355 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:39 np0005604943 nova_compute[238883]: 2026-02-02 12:13:39.659 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:13:39 np0005604943 nova_compute[238883]: 2026-02-02 12:13:39.689 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:39 np0005604943 nova_compute[238883]: 2026-02-02 12:13:39.690 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:39 np0005604943 nova_compute[238883]: 2026-02-02 12:13:39.690 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:39 np0005604943 nova_compute[238883]: 2026-02-02 12:13:39.690 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:13:39 np0005604943 nova_compute[238883]: 2026-02-02 12:13:39.690 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:13:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2284082299' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:13:40 np0005604943 nova_compute[238883]: 2026-02-02 12:13:40.269 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:40 np0005604943 nova_compute[238883]: 2026-02-02 12:13:40.416 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:13:40 np0005604943 nova_compute[238883]: 2026-02-02 12:13:40.418 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4329MB free_disk=59.98813335876912GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:13:40 np0005604943 nova_compute[238883]: 2026-02-02 12:13:40.419 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:13:40 np0005604943 nova_compute[238883]: 2026-02-02 12:13:40.419 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:13:40 np0005604943 nova_compute[238883]: 2026-02-02 12:13:40.479 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:13:40 np0005604943 nova_compute[238883]: 2026-02-02 12:13:40.479 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:13:40 np0005604943 nova_compute[238883]: 2026-02-02 12:13:40.500 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:13:40 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:13:40.642 155011 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63c28000-4b99-40fb-b19f-6b3ba1922f6d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 07:13:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 271 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.5 KiB/s wr, 75 op/s
Feb  2 07:13:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:13:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:13:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:13:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:13:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:13:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:13:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:13:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1899768933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:13:41 np0005604943 nova_compute[238883]: 2026-02-02 12:13:41.067 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:13:41 np0005604943 nova_compute[238883]: 2026-02-02 12:13:41.074 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:13:41 np0005604943 nova_compute[238883]: 2026-02-02 12:13:41.090 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:13:41 np0005604943 nova_compute[238883]: 2026-02-02 12:13:41.111 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:13:41 np0005604943 nova_compute[238883]: 2026-02-02 12:13:41.111 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:13:42 np0005604943 nova_compute[238883]: 2026-02-02 12:13:42.095 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:13:42 np0005604943 nova_compute[238883]: 2026-02-02 12:13:42.095 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:13:42 np0005604943 nova_compute[238883]: 2026-02-02 12:13:42.096 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:13:42 np0005604943 nova_compute[238883]: 2026-02-02 12:13:42.124 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.9 KiB/s wr, 62 op/s
Feb  2 07:13:42 np0005604943 nova_compute[238883]: 2026-02-02 12:13:42.843 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:42 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:13:44 np0005604943 nova_compute[238883]: 2026-02-02 12:13:44.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:13:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1023 B/s wr, 27 op/s
Feb  2 07:13:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:13:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3195208807' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:13:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:13:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3195208807' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:13:45 np0005604943 nova_compute[238883]: 2026-02-02 12:13:45.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:13:46 np0005604943 nova_compute[238883]: 2026-02-02 12:13:46.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:13:46 np0005604943 nova_compute[238883]: 2026-02-02 12:13:46.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:13:46 np0005604943 nova_compute[238883]: 2026-02-02 12:13:46.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 07:13:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1023 B/s wr, 27 op/s
Feb  2 07:13:46 np0005604943 nova_compute[238883]: 2026-02-02 12:13:46.674 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 07:13:47 np0005604943 nova_compute[238883]: 2026-02-02 12:13:47.126 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:47 np0005604943 nova_compute[238883]: 2026-02-02 12:13:47.812 238887 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770034412.8112807, 164c5391-dfe4-46ea-869a-95b649a1c3c7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 07:13:47 np0005604943 nova_compute[238883]: 2026-02-02 12:13:47.813 238887 INFO nova.compute.manager [-] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] VM Stopped (Lifecycle Event)#033[00m
Feb  2 07:13:47 np0005604943 nova_compute[238883]: 2026-02-02 12:13:47.832 238887 DEBUG nova.compute.manager [None req-04b4dd64-faed-470e-9697-53c6561c88e4 - - - - - -] [instance: 164c5391-dfe4-46ea-869a-95b649a1c3c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 07:13:47 np0005604943 nova_compute[238883]: 2026-02-02 12:13:47.845 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:47 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:13:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:13:49 np0005604943 nova_compute[238883]: 2026-02-02 12:13:49.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:13:49 np0005604943 nova_compute[238883]: 2026-02-02 12:13:49.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:13:49 np0005604943 nova_compute[238883]: 2026-02-02 12:13:49.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:13:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:13:52 np0005604943 nova_compute[238883]: 2026-02-02 12:13:52.127 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:52 np0005604943 nova_compute[238883]: 2026-02-02 12:13:52.635 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:13:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:13:52 np0005604943 nova_compute[238883]: 2026-02-02 12:13:52.847 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:13:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:13:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:13:57 np0005604943 podman[273530]: 2026-02-02 12:13:57.052516507 +0000 UTC m=+0.061342085 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Feb  2 07:13:57 np0005604943 podman[273529]: 2026-02-02 12:13:57.081377645 +0000 UTC m=+0.090450889 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb  2 07:13:57 np0005604943 nova_compute[238883]: 2026-02-02 12:13:57.128 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:57 np0005604943 nova_compute[238883]: 2026-02-02 12:13:57.849 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:13:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:13:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:02 np0005604943 nova_compute[238883]: 2026-02-02 12:14:02.129 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:02 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:02 np0005604943 nova_compute[238883]: 2026-02-02 12:14:02.852 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:02 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:14:04 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:06 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:07 np0005604943 nova_compute[238883]: 2026-02-02 12:14:07.131 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:07 np0005604943 nova_compute[238883]: 2026-02-02 12:14:07.854 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:07 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:14:08 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Optimize plan auto_2026-02-02_12:14:09
Feb  2 07:14:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 07:14:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] do_upmap
Feb  2 07:14:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'backups', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.meta']
Feb  2 07:14:09 np0005604943 ceph-mgr[75558]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 07:14:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:14:10.041 155011 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:14:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:14:10.041 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:14:10 np0005604943 ovn_metadata_agent[155006]: 2026-02-02 12:14:10.041 155011 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:14:10 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:14:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:14:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:14:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:14:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:14:10 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:14:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 07:14:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 07:14:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:14:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 07:14:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:14:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 07:14:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:14:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 07:14:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:14:11 np0005604943 ceph-mgr[75558]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 07:14:12 np0005604943 nova_compute[238883]: 2026-02-02 12:14:12.134 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:12 np0005604943 ovn_controller[145056]: 2026-02-02T12:14:12Z|00292|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb  2 07:14:12 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:12 np0005604943 nova_compute[238883]: 2026-02-02 12:14:12.885 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:14:12 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:14:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:14:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:14:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:14:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:14:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:14:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 07:14:13 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:14:13 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 07:14:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:14:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 07:14:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb  2 07:14:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 07:14:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:14:14 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:14:14 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:14:14 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:14:14 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:14:14 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb  2 07:14:14 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:14:14 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb  2 07:14:14 np0005604943 podman[273782]: 2026-02-02 12:14:14.462258354 +0000 UTC m=+0.061459348 container create ab903d307c081481d93d33efbe75eb44f4167dd38a6349e502802d1c3f387e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:14:14 np0005604943 systemd[1]: Started libpod-conmon-ab903d307c081481d93d33efbe75eb44f4167dd38a6349e502802d1c3f387e05.scope.
Feb  2 07:14:14 np0005604943 podman[273782]: 2026-02-02 12:14:14.425598885 +0000 UTC m=+0.024799909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:14:14 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:14:14 np0005604943 podman[273782]: 2026-02-02 12:14:14.578649771 +0000 UTC m=+0.177850795 container init ab903d307c081481d93d33efbe75eb44f4167dd38a6349e502802d1c3f387e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:14:14 np0005604943 podman[273782]: 2026-02-02 12:14:14.58749598 +0000 UTC m=+0.186696974 container start ab903d307c081481d93d33efbe75eb44f4167dd38a6349e502802d1c3f387e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 07:14:14 np0005604943 systemd[1]: libpod-ab903d307c081481d93d33efbe75eb44f4167dd38a6349e502802d1c3f387e05.scope: Deactivated successfully.
Feb  2 07:14:14 np0005604943 tender_hugle[273799]: 167 167
Feb  2 07:14:14 np0005604943 conmon[273799]: conmon ab903d307c081481d93d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab903d307c081481d93d33efbe75eb44f4167dd38a6349e502802d1c3f387e05.scope/container/memory.events
Feb  2 07:14:14 np0005604943 podman[273782]: 2026-02-02 12:14:14.598870976 +0000 UTC m=+0.198071990 container attach ab903d307c081481d93d33efbe75eb44f4167dd38a6349e502802d1c3f387e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hugle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:14:14 np0005604943 podman[273782]: 2026-02-02 12:14:14.600537141 +0000 UTC m=+0.199738155 container died ab903d307c081481d93d33efbe75eb44f4167dd38a6349e502802d1c3f387e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hugle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb  2 07:14:14 np0005604943 systemd[1]: var-lib-containers-storage-overlay-92aecc85d615eea1a63783c07e60cc087a84e17911989b835a2786aeccccb762-merged.mount: Deactivated successfully.
Feb  2 07:14:14 np0005604943 podman[273782]: 2026-02-02 12:14:14.666297734 +0000 UTC m=+0.265498728 container remove ab903d307c081481d93d33efbe75eb44f4167dd38a6349e502802d1c3f387e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hugle, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Feb  2 07:14:14 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:14 np0005604943 systemd[1]: libpod-conmon-ab903d307c081481d93d33efbe75eb44f4167dd38a6349e502802d1c3f387e05.scope: Deactivated successfully.
Feb  2 07:14:14 np0005604943 podman[273822]: 2026-02-02 12:14:14.805146866 +0000 UTC m=+0.048670013 container create 6577e708db21db5887ab8a1cfa6025d74e20a64c72da981278cbb5bf5b4ff214 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hodgkin, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 07:14:14 np0005604943 systemd[1]: Started libpod-conmon-6577e708db21db5887ab8a1cfa6025d74e20a64c72da981278cbb5bf5b4ff214.scope.
Feb  2 07:14:14 np0005604943 podman[273822]: 2026-02-02 12:14:14.779216858 +0000 UTC m=+0.022740035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:14:14 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:14:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80b7da7a2950e99bdfc8441bcecb58887f9fe7c3c6ce1e24e6b27891193acf46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:14:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80b7da7a2950e99bdfc8441bcecb58887f9fe7c3c6ce1e24e6b27891193acf46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:14:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80b7da7a2950e99bdfc8441bcecb58887f9fe7c3c6ce1e24e6b27891193acf46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:14:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80b7da7a2950e99bdfc8441bcecb58887f9fe7c3c6ce1e24e6b27891193acf46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:14:14 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80b7da7a2950e99bdfc8441bcecb58887f9fe7c3c6ce1e24e6b27891193acf46/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 07:14:14 np0005604943 podman[273822]: 2026-02-02 12:14:14.90653288 +0000 UTC m=+0.150056057 container init 6577e708db21db5887ab8a1cfa6025d74e20a64c72da981278cbb5bf5b4ff214 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 07:14:14 np0005604943 podman[273822]: 2026-02-02 12:14:14.91176867 +0000 UTC m=+0.155291817 container start 6577e708db21db5887ab8a1cfa6025d74e20a64c72da981278cbb5bf5b4ff214 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 07:14:14 np0005604943 podman[273822]: 2026-02-02 12:14:14.925522411 +0000 UTC m=+0.169045558 container attach 6577e708db21db5887ab8a1cfa6025d74e20a64c72da981278cbb5bf5b4ff214 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hodgkin, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb  2 07:14:15 np0005604943 wonderful_hodgkin[273839]: --> passed data devices: 0 physical, 3 LVM
Feb  2 07:14:15 np0005604943 wonderful_hodgkin[273839]: --> All data devices are unavailable
Feb  2 07:14:15 np0005604943 systemd[1]: libpod-6577e708db21db5887ab8a1cfa6025d74e20a64c72da981278cbb5bf5b4ff214.scope: Deactivated successfully.
Feb  2 07:14:15 np0005604943 podman[273822]: 2026-02-02 12:14:15.364163324 +0000 UTC m=+0.607686531 container died 6577e708db21db5887ab8a1cfa6025d74e20a64c72da981278cbb5bf5b4ff214 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hodgkin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb  2 07:14:15 np0005604943 systemd[1]: var-lib-containers-storage-overlay-80b7da7a2950e99bdfc8441bcecb58887f9fe7c3c6ce1e24e6b27891193acf46-merged.mount: Deactivated successfully.
Feb  2 07:14:15 np0005604943 podman[273822]: 2026-02-02 12:14:15.589415706 +0000 UTC m=+0.832938873 container remove 6577e708db21db5887ab8a1cfa6025d74e20a64c72da981278cbb5bf5b4ff214 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hodgkin, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 07:14:15 np0005604943 systemd[1]: libpod-conmon-6577e708db21db5887ab8a1cfa6025d74e20a64c72da981278cbb5bf5b4ff214.scope: Deactivated successfully.
Feb  2 07:14:16 np0005604943 podman[273935]: 2026-02-02 12:14:16.045153431 +0000 UTC m=+0.037806580 container create 74f8bf13cac052281148fe38efdc4549b5632df700995c669db4a62077362aad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:14:16 np0005604943 systemd[1]: Started libpod-conmon-74f8bf13cac052281148fe38efdc4549b5632df700995c669db4a62077362aad.scope.
Feb  2 07:14:16 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:14:16 np0005604943 podman[273935]: 2026-02-02 12:14:16.027466734 +0000 UTC m=+0.020119893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:14:16 np0005604943 podman[273935]: 2026-02-02 12:14:16.125785524 +0000 UTC m=+0.118438693 container init 74f8bf13cac052281148fe38efdc4549b5632df700995c669db4a62077362aad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dirac, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:14:16 np0005604943 podman[273935]: 2026-02-02 12:14:16.131275662 +0000 UTC m=+0.123928811 container start 74f8bf13cac052281148fe38efdc4549b5632df700995c669db4a62077362aad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dirac, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 07:14:16 np0005604943 dazzling_dirac[273951]: 167 167
Feb  2 07:14:16 np0005604943 systemd[1]: libpod-74f8bf13cac052281148fe38efdc4549b5632df700995c669db4a62077362aad.scope: Deactivated successfully.
Feb  2 07:14:16 np0005604943 podman[273935]: 2026-02-02 12:14:16.140040109 +0000 UTC m=+0.132693268 container attach 74f8bf13cac052281148fe38efdc4549b5632df700995c669db4a62077362aad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dirac, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 07:14:16 np0005604943 podman[273935]: 2026-02-02 12:14:16.140611384 +0000 UTC m=+0.133264573 container died 74f8bf13cac052281148fe38efdc4549b5632df700995c669db4a62077362aad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dirac, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:14:16 np0005604943 systemd[1]: var-lib-containers-storage-overlay-9568044b5d199c20f7e027208d42cdc993e8ab49cd4907c02250d9a0809ff7fe-merged.mount: Deactivated successfully.
Feb  2 07:14:16 np0005604943 podman[273935]: 2026-02-02 12:14:16.204143766 +0000 UTC m=+0.196796915 container remove 74f8bf13cac052281148fe38efdc4549b5632df700995c669db4a62077362aad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dirac, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 07:14:16 np0005604943 systemd[1]: libpod-conmon-74f8bf13cac052281148fe38efdc4549b5632df700995c669db4a62077362aad.scope: Deactivated successfully.
Feb  2 07:14:16 np0005604943 podman[273974]: 2026-02-02 12:14:16.352165236 +0000 UTC m=+0.047709967 container create 09df3e05c4440b6af96902919c70b620de4f5e8b21e33f02d6c521aa4446069a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wu, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:14:16 np0005604943 systemd[1]: Started libpod-conmon-09df3e05c4440b6af96902919c70b620de4f5e8b21e33f02d6c521aa4446069a.scope.
Feb  2 07:14:16 np0005604943 podman[273974]: 2026-02-02 12:14:16.328002085 +0000 UTC m=+0.023546796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:14:16 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:14:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5649366007d5803e449f9f45ecffb766eedb800e63eed3cf064629261e6e9fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:14:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5649366007d5803e449f9f45ecffb766eedb800e63eed3cf064629261e6e9fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:14:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5649366007d5803e449f9f45ecffb766eedb800e63eed3cf064629261e6e9fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:14:16 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5649366007d5803e449f9f45ecffb766eedb800e63eed3cf064629261e6e9fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:14:16 np0005604943 podman[273974]: 2026-02-02 12:14:16.448334049 +0000 UTC m=+0.143878780 container init 09df3e05c4440b6af96902919c70b620de4f5e8b21e33f02d6c521aa4446069a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wu, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 07:14:16 np0005604943 podman[273974]: 2026-02-02 12:14:16.456320333 +0000 UTC m=+0.151865024 container start 09df3e05c4440b6af96902919c70b620de4f5e8b21e33f02d6c521aa4446069a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb  2 07:14:16 np0005604943 podman[273974]: 2026-02-02 12:14:16.473049504 +0000 UTC m=+0.168594215 container attach 09df3e05c4440b6af96902919c70b620de4f5e8b21e33f02d6c521aa4446069a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:14:16 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:16 np0005604943 goofy_wu[273990]: {
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:    "0": [
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:        {
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "devices": [
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "/dev/loop3"
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            ],
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_name": "ceph_lv0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_size": "21470642176",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=e474a366-92f2-422d-9a63-15528361045b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "name": "ceph_lv0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "tags": {
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.block_uuid": "dvAs6I-Yv3y-Kh6a-WaaF-FkK1-HUdK-MhaoM0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.cluster_name": "ceph",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.crush_device_class": "",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.encrypted": "0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.objectstore": "bluestore",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.osd_fsid": "e474a366-92f2-422d-9a63-15528361045b",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.osd_id": "0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.type": "block",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.vdo": "0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.with_tpm": "0"
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            },
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "type": "block",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "vg_name": "ceph_vg0"
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:        }
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:    ],
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:    "1": [
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:        {
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "devices": [
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "/dev/loop4"
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            ],
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_name": "ceph_lv1",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_size": "21470642176",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6e5a583e-2cb6-47b2-abc4-810fb33b121b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "name": "ceph_lv1",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "tags": {
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.block_uuid": "3KHTou-rQZY-5gdz-xEUa-aSB3-76sz-OSuddp",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.cluster_name": "ceph",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.crush_device_class": "",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.encrypted": "0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.objectstore": "bluestore",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.osd_fsid": "6e5a583e-2cb6-47b2-abc4-810fb33b121b",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.osd_id": "1",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.type": "block",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.vdo": "0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.with_tpm": "0"
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            },
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "type": "block",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "vg_name": "ceph_vg1"
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:        }
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:    ],
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:    "2": [
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:        {
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "devices": [
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "/dev/loop5"
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            ],
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_name": "ceph_lv2",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_size": "21470642176",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4548a36b-7cdc-5e3e-a814-4e1571be1fae,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "lv_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "name": "ceph_lv2",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "tags": {
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.block_uuid": "K6jOCK-7eci-1XP3-fGW3-1UXd-ky2h-nJVIyw",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.cephx_lockbox_secret": "",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.cluster_fsid": "4548a36b-7cdc-5e3e-a814-4e1571be1fae",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.cluster_name": "ceph",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.crush_device_class": "",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.encrypted": "0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.objectstore": "bluestore",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.osd_fsid": "5ff6ef8c-ae3b-44a4-aa54-7d68ca65efa5",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.osd_id": "2",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.type": "block",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.vdo": "0",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:                "ceph.with_tpm": "0"
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            },
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "type": "block",
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:            "vg_name": "ceph_vg2"
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:        }
Feb  2 07:14:16 np0005604943 goofy_wu[273990]:    ]
Feb  2 07:14:16 np0005604943 goofy_wu[273990]: }
Feb  2 07:14:16 np0005604943 systemd[1]: libpod-09df3e05c4440b6af96902919c70b620de4f5e8b21e33f02d6c521aa4446069a.scope: Deactivated successfully.
Feb  2 07:14:16 np0005604943 podman[273974]: 2026-02-02 12:14:16.77228882 +0000 UTC m=+0.467833511 container died 09df3e05c4440b6af96902919c70b620de4f5e8b21e33f02d6c521aa4446069a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wu, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 07:14:16 np0005604943 systemd[1]: var-lib-containers-storage-overlay-c5649366007d5803e449f9f45ecffb766eedb800e63eed3cf064629261e6e9fc-merged.mount: Deactivated successfully.
Feb  2 07:14:16 np0005604943 podman[273974]: 2026-02-02 12:14:16.91471727 +0000 UTC m=+0.610262001 container remove 09df3e05c4440b6af96902919c70b620de4f5e8b21e33f02d6c521aa4446069a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 07:14:16 np0005604943 systemd[1]: libpod-conmon-09df3e05c4440b6af96902919c70b620de4f5e8b21e33f02d6c521aa4446069a.scope: Deactivated successfully.
Feb  2 07:14:17 np0005604943 nova_compute[238883]: 2026-02-02 12:14:17.135 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:17 np0005604943 podman[274074]: 2026-02-02 12:14:17.375449979 +0000 UTC m=+0.038819438 container create d94a8c3f651f87cb14f67cb5868afc6d3d7b6a6e9d2ceb09d2a5412696b4e77e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:14:17 np0005604943 systemd[1]: Started libpod-conmon-d94a8c3f651f87cb14f67cb5868afc6d3d7b6a6e9d2ceb09d2a5412696b4e77e.scope.
Feb  2 07:14:17 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:14:17 np0005604943 podman[274074]: 2026-02-02 12:14:17.447510581 +0000 UTC m=+0.110880070 container init d94a8c3f651f87cb14f67cb5868afc6d3d7b6a6e9d2ceb09d2a5412696b4e77e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:14:17 np0005604943 podman[274074]: 2026-02-02 12:14:17.454737286 +0000 UTC m=+0.118106745 container start d94a8c3f651f87cb14f67cb5868afc6d3d7b6a6e9d2ceb09d2a5412696b4e77e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 07:14:17 np0005604943 podman[274074]: 2026-02-02 12:14:17.35953225 +0000 UTC m=+0.022901739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:14:17 np0005604943 podman[274074]: 2026-02-02 12:14:17.458486867 +0000 UTC m=+0.121856346 container attach d94a8c3f651f87cb14f67cb5868afc6d3d7b6a6e9d2ceb09d2a5412696b4e77e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb  2 07:14:17 np0005604943 fervent_jackson[274090]: 167 167
Feb  2 07:14:17 np0005604943 systemd[1]: libpod-d94a8c3f651f87cb14f67cb5868afc6d3d7b6a6e9d2ceb09d2a5412696b4e77e.scope: Deactivated successfully.
Feb  2 07:14:17 np0005604943 podman[274074]: 2026-02-02 12:14:17.460621775 +0000 UTC m=+0.123991254 container died d94a8c3f651f87cb14f67cb5868afc6d3d7b6a6e9d2ceb09d2a5412696b4e77e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 07:14:17 np0005604943 systemd[1]: var-lib-containers-storage-overlay-33d2a5fe8ad2424bc34a0709d4b9402cb96efef97c5ac0de8a1b2a78a1fd9d61-merged.mount: Deactivated successfully.
Feb  2 07:14:17 np0005604943 podman[274074]: 2026-02-02 12:14:17.503691706 +0000 UTC m=+0.167061175 container remove d94a8c3f651f87cb14f67cb5868afc6d3d7b6a6e9d2ceb09d2a5412696b4e77e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 07:14:17 np0005604943 systemd[1]: libpod-conmon-d94a8c3f651f87cb14f67cb5868afc6d3d7b6a6e9d2ceb09d2a5412696b4e77e.scope: Deactivated successfully.
Feb  2 07:14:17 np0005604943 podman[274114]: 2026-02-02 12:14:17.647784749 +0000 UTC m=+0.045430855 container create 24ac918513b4691df8ffa1c797000b96eb6d090b9554879eb46d71d9aecc6ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb  2 07:14:17 np0005604943 systemd[1]: Started libpod-conmon-24ac918513b4691df8ffa1c797000b96eb6d090b9554879eb46d71d9aecc6ec5.scope.
Feb  2 07:14:17 np0005604943 systemd[1]: Started libcrun container.
Feb  2 07:14:17 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd550e5d5bf2b1bfc23d9a95b739b5de64aa9c320d92af1f57390e4bcb8ea1f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 07:14:17 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd550e5d5bf2b1bfc23d9a95b739b5de64aa9c320d92af1f57390e4bcb8ea1f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 07:14:17 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd550e5d5bf2b1bfc23d9a95b739b5de64aa9c320d92af1f57390e4bcb8ea1f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 07:14:17 np0005604943 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd550e5d5bf2b1bfc23d9a95b739b5de64aa9c320d92af1f57390e4bcb8ea1f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 07:14:17 np0005604943 podman[274114]: 2026-02-02 12:14:17.630700619 +0000 UTC m=+0.028346735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb  2 07:14:17 np0005604943 podman[274114]: 2026-02-02 12:14:17.729624636 +0000 UTC m=+0.127270742 container init 24ac918513b4691df8ffa1c797000b96eb6d090b9554879eb46d71d9aecc6ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_tharp, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 07:14:17 np0005604943 podman[274114]: 2026-02-02 12:14:17.734406864 +0000 UTC m=+0.132052950 container start 24ac918513b4691df8ffa1c797000b96eb6d090b9554879eb46d71d9aecc6ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb  2 07:14:17 np0005604943 podman[274114]: 2026-02-02 12:14:17.739556663 +0000 UTC m=+0.137202769 container attach 24ac918513b4691df8ffa1c797000b96eb6d090b9554879eb46d71d9aecc6ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_tharp, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 07:14:17 np0005604943 nova_compute[238883]: 2026-02-02 12:14:17.888 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:17 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:14:18 np0005604943 lvm[274210]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:14:18 np0005604943 lvm[274210]: VG ceph_vg1 finished
Feb  2 07:14:18 np0005604943 lvm[274209]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:14:18 np0005604943 lvm[274209]: VG ceph_vg0 finished
Feb  2 07:14:18 np0005604943 lvm[274212]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:14:18 np0005604943 lvm[274212]: VG ceph_vg2 finished
Feb  2 07:14:18 np0005604943 objective_tharp[274131]: {}
Feb  2 07:14:18 np0005604943 systemd[1]: libpod-24ac918513b4691df8ffa1c797000b96eb6d090b9554879eb46d71d9aecc6ec5.scope: Deactivated successfully.
Feb  2 07:14:18 np0005604943 systemd[1]: libpod-24ac918513b4691df8ffa1c797000b96eb6d090b9554879eb46d71d9aecc6ec5.scope: Consumed 1.324s CPU time.
Feb  2 07:14:18 np0005604943 podman[274114]: 2026-02-02 12:14:18.61126326 +0000 UTC m=+1.008909346 container died 24ac918513b4691df8ffa1c797000b96eb6d090b9554879eb46d71d9aecc6ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_tharp, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 07:14:18 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:18 np0005604943 systemd[1]: var-lib-containers-storage-overlay-fd550e5d5bf2b1bfc23d9a95b739b5de64aa9c320d92af1f57390e4bcb8ea1f9-merged.mount: Deactivated successfully.
Feb  2 07:14:18 np0005604943 podman[274114]: 2026-02-02 12:14:18.744401689 +0000 UTC m=+1.142047775 container remove 24ac918513b4691df8ffa1c797000b96eb6d090b9554879eb46d71d9aecc6ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_tharp, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 07:14:18 np0005604943 systemd[1]: libpod-conmon-24ac918513b4691df8ffa1c797000b96eb6d090b9554879eb46d71d9aecc6ec5.scope: Deactivated successfully.
Feb  2 07:14:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 07:14:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:14:18 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 07:14:18 np0005604943 ceph-mon[75271]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:14:19 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:14:19 np0005604943 ceph-mon[75271]: from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' 
Feb  2 07:14:20 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.466127137665234e-06 of space, bias 1.0, pg target 0.0007398381412995702 quantized to 32 (current 32)
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002907625079628483 of space, bias 1.0, pg target 0.8722875238885449 quantized to 32 (current 32)
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.574087369610433e-06 of space, bias 1.0, pg target 0.00047222621088312993 quantized to 32 (current 32)
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670781533187306 of space, bias 1.0, pg target 0.20012344599561918 quantized to 32 (current 32)
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0765863363806636e-06 of space, bias 4.0, pg target 0.0012919036036567963 quantized to 16 (current 16)
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 07:14:21 np0005604943 ceph-mgr[75558]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 07:14:22 np0005604943 nova_compute[238883]: 2026-02-02 12:14:22.136 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:22 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:22 np0005604943 nova_compute[238883]: 2026-02-02 12:14:22.891 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:22 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:14:24 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:26 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:27 np0005604943 nova_compute[238883]: 2026-02-02 12:14:27.138 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:27 np0005604943 nova_compute[238883]: 2026-02-02 12:14:27.894 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:27 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:14:28 np0005604943 podman[274255]: 2026-02-02 12:14:28.05140522 +0000 UTC m=+0.060153543 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 07:14:28 np0005604943 podman[274254]: 2026-02-02 12:14:28.080136444 +0000 UTC m=+0.088825995 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Feb  2 07:14:28 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:30 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:32 np0005604943 nova_compute[238883]: 2026-02-02 12:14:32.141 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:32 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:32 np0005604943 nova_compute[238883]: 2026-02-02 12:14:32.896 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:33 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:14:34 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:36 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:37 np0005604943 nova_compute[238883]: 2026-02-02 12:14:37.143 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:37 np0005604943 nova_compute[238883]: 2026-02-02 12:14:37.899 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:38 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:14:38 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:39 np0005604943 nova_compute[238883]: 2026-02-02 12:14:39.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:14:40 np0005604943 nova_compute[238883]: 2026-02-02 12:14:40.328 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:14:40 np0005604943 nova_compute[238883]: 2026-02-02 12:14:40.329 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:14:40 np0005604943 nova_compute[238883]: 2026-02-02 12:14:40.329 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:14:40 np0005604943 nova_compute[238883]: 2026-02-02 12:14:40.329 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 07:14:40 np0005604943 nova_compute[238883]: 2026-02-02 12:14:40.330 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:14:40 np0005604943 systemd-logind[786]: New session 51 of user zuul.
Feb  2 07:14:40 np0005604943 systemd[1]: Started Session 51 of User zuul.
Feb  2 07:14:40 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:14:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:14:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:14:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:14:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 07:14:40 np0005604943 ceph-mgr[75558]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 07:14:40 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:14:40 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4169128529' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:14:40 np0005604943 nova_compute[238883]: 2026-02-02 12:14:40.891 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:14:41 np0005604943 nova_compute[238883]: 2026-02-02 12:14:41.066 238887 WARNING nova.virt.libvirt.driver [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 07:14:41 np0005604943 nova_compute[238883]: 2026-02-02 12:14:41.067 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4305MB free_disk=59.98813331127167GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 07:14:41 np0005604943 nova_compute[238883]: 2026-02-02 12:14:41.067 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 07:14:41 np0005604943 nova_compute[238883]: 2026-02-02 12:14:41.067 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 07:14:41 np0005604943 nova_compute[238883]: 2026-02-02 12:14:41.382 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 07:14:41 np0005604943 nova_compute[238883]: 2026-02-02 12:14:41.383 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 07:14:41 np0005604943 nova_compute[238883]: 2026-02-02 12:14:41.417 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 07:14:41 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 07:14:41 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2210923230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb  2 07:14:41 np0005604943 nova_compute[238883]: 2026-02-02 12:14:41.998 238887 DEBUG oslo_concurrency.processutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 07:14:42 np0005604943 nova_compute[238883]: 2026-02-02 12:14:42.004 238887 DEBUG nova.compute.provider_tree [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed in ProviderTree for provider: 30401227-b88f-415d-9c2d-3119bd1baf61 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 07:14:42 np0005604943 nova_compute[238883]: 2026-02-02 12:14:42.033 238887 DEBUG nova.scheduler.client.report [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Inventory has not changed for provider 30401227-b88f-415d-9c2d-3119bd1baf61 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 07:14:42 np0005604943 nova_compute[238883]: 2026-02-02 12:14:42.035 238887 DEBUG nova.compute.resource_tracker [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 07:14:42 np0005604943 nova_compute[238883]: 2026-02-02 12:14:42.035 238887 DEBUG oslo_concurrency.lockutils [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 07:14:42 np0005604943 nova_compute[238883]: 2026-02-02 12:14:42.144 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:42 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:42 np0005604943 nova_compute[238883]: 2026-02-02 12:14:42.901 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:43 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19092 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.019440) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034483019526, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 900, "num_deletes": 259, "total_data_size": 1216037, "memory_usage": 1240768, "flush_reason": "Manual Compaction"}
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034483026868, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1193431, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35432, "largest_seqno": 36331, "table_properties": {"data_size": 1188883, "index_size": 2136, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9999, "raw_average_key_size": 19, "raw_value_size": 1179621, "raw_average_value_size": 2290, "num_data_blocks": 96, "num_entries": 515, "num_filter_entries": 515, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770034408, "oldest_key_time": 1770034408, "file_creation_time": 1770034483, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 7469 microseconds, and 3804 cpu microseconds.
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.026920) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1193431 bytes OK
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.026944) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.028585) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.028600) EVENT_LOG_v1 {"time_micros": 1770034483028595, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.028624) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1211602, prev total WAL file size 1211602, number of live WAL files 2.
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.029219) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303036' seq:72057594037927935, type:22 .. '6C6F676D0031323630' seq:0, type:0; will stop at (end)
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1165KB)], [71(10191KB)]
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034483029269, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11629975, "oldest_snapshot_seqno": -1}
Feb  2 07:14:43 np0005604943 nova_compute[238883]: 2026-02-02 12:14:43.035 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:14:43 np0005604943 nova_compute[238883]: 2026-02-02 12:14:43.035 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:14:43 np0005604943 nova_compute[238883]: 2026-02-02 12:14:43.035 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6878 keys, 11465595 bytes, temperature: kUnknown
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034483071281, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 11465595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11412949, "index_size": 34358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 173233, "raw_average_key_size": 25, "raw_value_size": 11282936, "raw_average_value_size": 1640, "num_data_blocks": 1375, "num_entries": 6878, "num_filter_entries": 6878, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770031849, "oldest_key_time": 0, "file_creation_time": 1770034483, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cd28d1c1-a55b-4e90-928b-e550748bad19", "db_session_id": "QIU1XPNVBJBWFCSW99QT", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.071536) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 11465595 bytes
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.075514) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 276.3 rd, 272.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(19.4) write-amplify(9.6) OK, records in: 7412, records dropped: 534 output_compression: NoCompression
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.075535) EVENT_LOG_v1 {"time_micros": 1770034483075525, "job": 40, "event": "compaction_finished", "compaction_time_micros": 42085, "compaction_time_cpu_micros": 17962, "output_level": 6, "num_output_files": 1, "total_output_size": 11465595, "num_input_records": 7412, "num_output_records": 6878, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034483075777, "job": 40, "event": "table_file_deletion", "file_number": 73}
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770034483077368, "job": 40, "event": "table_file_deletion", "file_number": 71}
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.029102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.077441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.077448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.077450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.077452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:14:43 np0005604943 ceph-mon[75271]: rocksdb: (Original Log Time 2026/02/02-12:14:43.077454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 07:14:43 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19094 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:44 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb  2 07:14:44 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422483008' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb  2 07:14:44 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 07:14:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3922356475' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb  2 07:14:45 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 07:14:45 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3922356475' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb  2 07:14:46 np0005604943 nova_compute[238883]: 2026-02-02 12:14:46.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:14:46 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:47 np0005604943 nova_compute[238883]: 2026-02-02 12:14:47.146 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:47 np0005604943 nova_compute[238883]: 2026-02-02 12:14:47.642 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:14:47 np0005604943 nova_compute[238883]: 2026-02-02 12:14:47.904 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:48 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:14:48 np0005604943 nova_compute[238883]: 2026-02-02 12:14:48.643 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:14:48 np0005604943 nova_compute[238883]: 2026-02-02 12:14:48.643 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 07:14:48 np0005604943 nova_compute[238883]: 2026-02-02 12:14:48.644 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 07:14:48 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:48 np0005604943 nova_compute[238883]: 2026-02-02 12:14:48.924 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 07:14:49 np0005604943 ovs-vsctl[274671]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb  2 07:14:49 np0005604943 nova_compute[238883]: 2026-02-02 12:14:49.641 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:14:49 np0005604943 nova_compute[238883]: 2026-02-02 12:14:49.642 238887 DEBUG nova.compute.manager [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 07:14:50 np0005604943 virtqemud[238654]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb  2 07:14:50 np0005604943 virtqemud[238654]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb  2 07:14:50 np0005604943 virtqemud[238654]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb  2 07:14:50 np0005604943 nova_compute[238883]: 2026-02-02 12:14:50.636 238887 DEBUG oslo_service.periodic_task [None req-2db00bf8-a3a7-4a5c-8821-4d324af3ae5a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 07:14:50 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:50 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue asok_command: cache status {prefix=cache status} (starting...)
Feb  2 07:14:51 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue asok_command: client ls {prefix=client ls} (starting...)
Feb  2 07:14:51 np0005604943 lvm[275028]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb  2 07:14:51 np0005604943 lvm[275028]: VG ceph_vg1 finished
Feb  2 07:14:51 np0005604943 lvm[275031]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb  2 07:14:51 np0005604943 lvm[275031]: VG ceph_vg2 finished
Feb  2 07:14:51 np0005604943 lvm[275035]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 07:14:51 np0005604943 lvm[275035]: VG ceph_vg0 finished
Feb  2 07:14:51 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19102 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:51 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue asok_command: damage ls {prefix=damage ls} (starting...)
Feb  2 07:14:51 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue asok_command: dump loads {prefix=dump loads} (starting...)
Feb  2 07:14:51 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19104 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:51 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb  2 07:14:52 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb  2 07:14:52 np0005604943 nova_compute[238883]: 2026-02-02 12:14:52.147 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:52 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb  2 07:14:52 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb  2 07:14:52 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19108 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Feb  2 07:14:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3785694330' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb  2 07:14:52 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb  2 07:14:52 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb  2 07:14:52 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:52 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 07:14:52 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3038311305' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb  2 07:14:52 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19110 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:52 np0005604943 ceph-mgr[75558]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  2 07:14:52 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg[75554]: 2026-02-02T12:14:52.813+0000 7f564e481640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  2 07:14:52 np0005604943 nova_compute[238883]: 2026-02-02 12:14:52.905 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:52 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue asok_command: ops {prefix=ops} (starting...)
Feb  2 07:14:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:14:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Feb  2 07:14:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3910132479' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb  2 07:14:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb  2 07:14:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806803123' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb  2 07:14:53 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue asok_command: session ls {prefix=session ls} (starting...)
Feb  2 07:14:53 np0005604943 ceph-mds[95505]: mds.cephfs.compute-0.mldrue asok_command: status {prefix=status} (starting...)
Feb  2 07:14:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb  2 07:14:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327951065' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb  2 07:14:53 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb  2 07:14:53 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1026384747' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb  2 07:14:54 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19123 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb  2 07:14:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2346296647' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb  2 07:14:54 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:54 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19126 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:54 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb  2 07:14:54 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/214675856' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb  2 07:14:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Feb  2 07:14:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737746428' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb  2 07:14:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 07:14:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2447600602' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb  2 07:14:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb  2 07:14:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2774950326' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb  2 07:14:55 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb  2 07:14:55 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1704268045' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb  2 07:14:56 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19138 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:56 np0005604943 ceph-mgr[75558]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 07:14:56 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg[75554]: 2026-02-02T12:14:56.383+0000 7f564e481640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 07:14:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb  2 07:14:56 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2328803820' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb  2 07:14:56 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:56 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb  2 07:14:56 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/768983848' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb  2 07:14:56 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19144 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:57 np0005604943 nova_compute[238883]: 2026-02-02 12:14:57.150 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 221184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 221184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 221184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932744 data_alloc: 218103808 data_used: 10746
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932744 data_alloc: 218103808 data_used: 10746
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932744 data_alloc: 218103808 data_used: 10746
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 212992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 296.591705322s of 296.757690430s, submitted: 24
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 139264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 843776 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 843776 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932744 data_alloc: 218103808 data_used: 10746
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 843776 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 843776 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 843776 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 843776 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 835584 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932744 data_alloc: 218103808 data_used: 10746
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 835584 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 835584 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 835584 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 835584 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 835584 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932744 data_alloc: 218103808 data_used: 10746
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 835584 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 835584 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 835584 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 835584 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 827392 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932744 data_alloc: 218103808 data_used: 10746
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 819200 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 819200 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 819200 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 819200 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 819200 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcebc000/0x0/0x4ffc00000, data 0xb7bea/0x170000, compress 0x0/0x0/0x0, omap 0x1036b, meta 0x2bbfc95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 120 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.012277603s of 22.186645508s, submitted: 90
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 940990 data_alloc: 218103808 data_used: 10746
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 720896 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fc6b2000/0x0/0x4ffc00000, data 0x8bb376/0x976000, compress 0x0/0x0/0x0, omap 0x10b82, meta 0x2bbf47e), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 123 ms_handle_reset con 0x560f69f2d000 session 0x560f6b4356c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 16367616 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 16138240 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 16080896 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 124 ms_handle_reset con 0x560f69b4fc00 session 0x560f685ee540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 16039936 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994517 data_alloc: 218103808 data_used: 11944
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 16039936 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc6ac000/0x0/0x4ffc00000, data 0x8beb2c/0x97e000, compress 0x0/0x0/0x0, omap 0x115b2, meta 0x2bbea4e), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fc6ac000/0x0/0x4ffc00000, data 0x8beb2c/0x97e000, compress 0x0/0x0/0x0, omap 0x115b2, meta 0x2bbea4e), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997115 data_alloc: 218103808 data_used: 12529
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fc6a9000/0x0/0x4ffc00000, data 0x8c05ab/0x981000, compress 0x0/0x0/0x0, omap 0x11885, meta 0x2bbe77b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997115 data_alloc: 218103808 data_used: 12529
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fc6a9000/0x0/0x4ffc00000, data 0x8c05ab/0x981000, compress 0x0/0x0/0x0, omap 0x11885, meta 0x2bbe77b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997115 data_alloc: 218103808 data_used: 12529
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fc6a9000/0x0/0x4ffc00000, data 0x8c05ab/0x981000, compress 0x0/0x0/0x0, omap 0x11885, meta 0x2bbe77b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.868495941s of 24.994037628s, submitted: 64
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 16179200 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 126 ms_handle_reset con 0x560f69b4e000 session 0x560f6b435c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001641 data_alloc: 218103808 data_used: 12529
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 16039936 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 127 ms_handle_reset con 0x560f6b986000 session 0x560f69372e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fc6a5000/0x0/0x4ffc00000, data 0x8c2157/0x985000, compress 0x0/0x0/0x0, omap 0x11b10, meta 0x2bbe4f0), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 15884288 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 15605760 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 128 ms_handle_reset con 0x560f6b986400 session 0x560f6ba5a700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 128 ms_handle_reset con 0x560f6b986800 session 0x560f69b9b6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 15605760 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 128 ms_handle_reset con 0x560f69b4e000 session 0x560f6ba1c1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 15605760 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 128 ms_handle_reset con 0x560f6b986c00 session 0x560f685ee380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 129 ms_handle_reset con 0x560f6b987000 session 0x560f6ba1d180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014810 data_alloc: 218103808 data_used: 12545
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 15605760 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 129 ms_handle_reset con 0x560f6b987400 session 0x560f6ba1c000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 129 ms_handle_reset con 0x560f69b4e000 session 0x560f6b897dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 129 ms_handle_reset con 0x560f6b986c00 session 0x560f6adbe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 15589376 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fc69b000/0x0/0x4ffc00000, data 0x8c7551/0x98f000, compress 0x0/0x0/0x0, omap 0x12457, meta 0x2bbdba9), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 131 ms_handle_reset con 0x560f6b987000 session 0x560f6b897880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 15581184 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 131 ms_handle_reset con 0x560f6b987800 session 0x560f69c02540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 131 ms_handle_reset con 0x560f6b987400 session 0x560f6ba5a380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 131 ms_handle_reset con 0x560f69b4e000 session 0x560f6b896000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 15556608 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 15556608 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 131 ms_handle_reset con 0x560f6b986c00 session 0x560f6ba4c700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.482122421s of 10.797729492s, submitted: 52
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 131 ms_handle_reset con 0x560f6b987000 session 0x560f6ba4cfc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1020209 data_alloc: 218103808 data_used: 12643
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 15671296 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 131 ms_handle_reset con 0x560f6b987800 session 0x560f68edfc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 14491648 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 133 ms_handle_reset con 0x560f6b986400 session 0x560f6ba1c540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fc68d000/0x0/0x4ffc00000, data 0x8ce213/0x99b000, compress 0x0/0x0/0x0, omap 0x13639, meta 0x2bbc9c7), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 14475264 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 133 ms_handle_reset con 0x560f69b4e000 session 0x560f6a0c9880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 133 ms_handle_reset con 0x560f6b987c00 session 0x560f6b896e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 14409728 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 133 ms_handle_reset con 0x560f6b986000 session 0x560f6b896700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 14409728 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fc691000/0x0/0x4ffc00000, data 0x8ce213/0x99b000, compress 0x0/0x0/0x0, omap 0x137b5, meta 0x2bbc84b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025237 data_alloc: 218103808 data_used: 13512
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 14409728 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 134 ms_handle_reset con 0x560f6b986800 session 0x560f6ba4ce00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fc68c000/0x0/0x4ffc00000, data 0x8cfdcb/0x99e000, compress 0x0/0x0/0x0, omap 0x13a40, meta 0x2bbc5c0), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 14409728 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 134 ms_handle_reset con 0x560f6b987400 session 0x560f6ba1cfc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 134 ms_handle_reset con 0x560f69b4e000 session 0x560f6b4348c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 14270464 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 14270464 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 135 ms_handle_reset con 0x560f6b986000 session 0x560f68edefc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 136 ms_handle_reset con 0x560f6b986800 session 0x560f68edec40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fc685000/0x0/0x4ffc00000, data 0x8d3593/0x9a5000, compress 0x0/0x0/0x0, omap 0x14165, meta 0x2bbbe9b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 14270464 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.965662956s of 10.091511726s, submitted: 66
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037779 data_alloc: 218103808 data_used: 13544
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 79200256 unmapped: 14123008 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 137 ms_handle_reset con 0x560f6b987c00 session 0x560f6ad27a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 137 ms_handle_reset con 0x560f6b987800 session 0x560f69b5a700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 137 ms_handle_reset con 0x560f69b4e000 session 0x560f692b4fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 14213120 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 137 ms_handle_reset con 0x560f6b986000 session 0x560f6ba1ce00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 137 ms_handle_reset con 0x560f6b986800 session 0x560f6ba5afc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 14163968 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 138 ms_handle_reset con 0x560f6b987c00 session 0x560f69b5a540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 138 ms_handle_reset con 0x560f6b986c00 session 0x560f6a0c9180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 139 ms_handle_reset con 0x560f6b987000 session 0x560f6ba1ca80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 139 ms_handle_reset con 0x560f6b986800 session 0x560f69372000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 14262272 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 139 ms_handle_reset con 0x560f69b4e000 session 0x560f68edf6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 139 ms_handle_reset con 0x560f6b986000 session 0x560f685ef880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 14262272 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fc67a000/0x0/0x4ffc00000, data 0x8d8973/0x9ae000, compress 0x0/0x0/0x0, omap 0x15082, meta 0x2bbaf7e), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 139 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fc67a000/0x0/0x4ffc00000, data 0x8d8973/0x9ae000, compress 0x0/0x0/0x0, omap 0x15082, meta 0x2bbaf7e), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050857 data_alloc: 218103808 data_used: 14770
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 13172736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 140 ms_handle_reset con 0x560f6b987c00 session 0x560f692b4000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 140 ms_handle_reset con 0x560f6b986000 session 0x560f6ad27500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 140 ms_handle_reset con 0x560f6b986800 session 0x560f69b9aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 141 ms_handle_reset con 0x560f6b987000 session 0x560f6952a1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 141 ms_handle_reset con 0x560f69b4e000 session 0x560f6ba5ae00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 12115968 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc675000/0x0/0x4ffc00000, data 0x8dc17b/0x9b3000, compress 0x0/0x0/0x0, omap 0x15730, meta 0x2bba8d0), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 141 ms_handle_reset con 0x560f69f2d000 session 0x560f69b5a000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 141 ms_handle_reset con 0x560f69b4fc00 session 0x560f6b484540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 12107776 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 142 ms_handle_reset con 0x560f69b4e000 session 0x560f6ba4d500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 12107776 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 142 ms_handle_reset con 0x560f6b986000 session 0x560f6b897c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 142 ms_handle_reset con 0x560f69f2d000 session 0x560f6ba1da40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 12107776 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 143 ms_handle_reset con 0x560f6b986800 session 0x560f6ad26c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 143 ms_handle_reset con 0x560f69b4e000 session 0x560f69f24380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.797811508s of 10.170699120s, submitted: 129
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062525 data_alloc: 218103808 data_used: 14754
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 144 ms_handle_reset con 0x560f6b986000 session 0x560f6b896380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 12238848 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fc66f000/0x0/0x4ffc00000, data 0x8df923/0x9b9000, compress 0x0/0x0/0x0, omap 0x15c90, meta 0x2bba370), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 144 ms_handle_reset con 0x560f69f2d000 session 0x560f69c028c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 12189696 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 145 ms_handle_reset con 0x560f6b986800 session 0x560f6ba1d340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 145 ms_handle_reset con 0x560f6b987000 session 0x560f6ba1c380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 145 ms_handle_reset con 0x560f69b4e000 session 0x560f6b897180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 146 ms_handle_reset con 0x560f69f2d000 session 0x560f6b484a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 12189696 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 12189696 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc667000/0x0/0x4ffc00000, data 0x8e4d03/0x9c1000, compress 0x0/0x0/0x0, omap 0x1648d, meta 0x2bb9b73), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 12271616 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fc667000/0x0/0x4ffc00000, data 0x8e4d03/0x9c1000, compress 0x0/0x0/0x0, omap 0x1648d, meta 0x2bb9b73), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068099 data_alloc: 218103808 data_used: 15936
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 12271616 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 12263424 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 12263424 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 12263424 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 12263424 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fc666000/0x0/0x4ffc00000, data 0x8e67ae/0x9c4000, compress 0x0/0x0/0x0, omap 0x167d7, meta 0x2bb9829), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070409 data_alloc: 218103808 data_used: 15936
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 12263424 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.643649101s of 10.723531723s, submitted: 48
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 12263424 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 12263424 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 12263424 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 12263424 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073183 data_alloc: 218103808 data_used: 15936
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fc663000/0x0/0x4ffc00000, data 0x8e8249/0x9c7000, compress 0x0/0x0/0x0, omap 0x16b0b, meta 0x2bb94f5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 12263424 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 12263424 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 12263424 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 12263424 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x8e9cc8/0x9ca000, compress 0x0/0x0/0x0, omap 0x16da4, meta 0x2bb925c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 12320768 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075957 data_alloc: 218103808 data_used: 15936
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 12320768 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 12320768 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 12320768 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x8e9cc8/0x9ca000, compress 0x0/0x0/0x0, omap 0x16da4, meta 0x2bb925c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.371385574s of 12.385271072s, submitted: 20
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 ms_handle_reset con 0x560f6b986000 session 0x560f6ad26fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 12304384 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 12304384 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fc660000/0x0/0x4ffc00000, data 0x8e9d3a/0x9cc000, compress 0x0/0x0/0x0, omap 0x16f38, meta 0x2bb90c8), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 ms_handle_reset con 0x560f6b986800 session 0x560f69f25a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079941 data_alloc: 218103808 data_used: 15936
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 ms_handle_reset con 0x560f6b4d2400 session 0x560f68edf180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 ms_handle_reset con 0x560f69b4e000 session 0x560f6ba1d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 ms_handle_reset con 0x560f69f2d000 session 0x560f6ba1c000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fc661000/0x0/0x4ffc00000, data 0x8e9d2a/0x9cb000, compress 0x0/0x0/0x0, omap 0x170a8, meta 0x2bb8f58), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 ms_handle_reset con 0x560f6b986000 session 0x560f69c02c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079116 data_alloc: 218103808 data_used: 15936
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fc661000/0x0/0x4ffc00000, data 0x8e9d2a/0x9cb000, compress 0x0/0x0/0x0, omap 0x170a8, meta 0x2bb8f58), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079116 data_alloc: 218103808 data_used: 15936
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fc662000/0x0/0x4ffc00000, data 0x8e9cc8/0x9ca000, compress 0x0/0x0/0x0, omap 0x170a8, meta 0x2bb8f58), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.510160446s of 15.575152397s, submitted: 35
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 ms_handle_reset con 0x560f6b986800 session 0x560f68edec40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 12148736 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fc662000/0x0/0x4ffc00000, data 0x8e9cc8/0x9ca000, compress 0x0/0x0/0x0, omap 0x17258, meta 0x2bb8da8), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080896 data_alloc: 218103808 data_used: 15936
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 12017664 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 150 ms_handle_reset con 0x560f69b4fc00 session 0x560f69c02a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 10944512 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 150 ms_handle_reset con 0x560f69b4e000 session 0x560f6b897880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 10862592 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 151 ms_handle_reset con 0x560f69b4fc00 session 0x560f6b896700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 10854400 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 151 ms_handle_reset con 0x560f69f2d000 session 0x560f69373a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 151 ms_handle_reset con 0x560f6b986000 session 0x560f69b9aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fc65a000/0x0/0x4ffc00000, data 0x8ed454/0x9d0000, compress 0x0/0x0/0x0, omap 0x17dbe, meta 0x2bb8242), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 10862592 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086345 data_alloc: 218103808 data_used: 15936
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 10862592 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 10862592 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 10862592 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 10862592 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 10862592 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fc65a000/0x0/0x4ffc00000, data 0x8ed454/0x9d0000, compress 0x0/0x0/0x0, omap 0x17dbe, meta 0x2bb8242), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086345 data_alloc: 218103808 data_used: 15936
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 10862592 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.940203667s of 12.057000160s, submitted: 44
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 152 ms_handle_reset con 0x560f6b986800 session 0x560f6b8961c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 10977280 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 152 ms_handle_reset con 0x560f69b4e000 session 0x560f69b5b6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 10977280 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 152 ms_handle_reset con 0x560f69b4fc00 session 0x560f6b896fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 10952704 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 152 ms_handle_reset con 0x560f69f2d000 session 0x560f6ba1c8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fc657000/0x0/0x4ffc00000, data 0x8eef35/0x9d4000, compress 0x0/0x0/0x0, omap 0x18354, meta 0x2bb7cac), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 10952704 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091903 data_alloc: 218103808 data_used: 15936
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 10952704 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 10952704 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 10952704 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 154 ms_handle_reset con 0x560f6b986000 session 0x560f6adbfc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 10805248 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fc649000/0x0/0x4ffc00000, data 0x8f42ee/0x9df000, compress 0x0/0x0/0x0, omap 0x18c75, meta 0x2bb738b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 155 ms_handle_reset con 0x560f6b4d2000 session 0x560f6b897a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 10379264 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 156 ms_handle_reset con 0x560f6b987000 session 0x560f69b9b340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 156 ms_handle_reset con 0x560f69b4f400 session 0x560f6ad27500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110471 data_alloc: 218103808 data_used: 16034
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 10379264 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fc643000/0x0/0x4ffc00000, data 0x8f5f08/0x9e3000, compress 0x0/0x0/0x0, omap 0x18f3c, meta 0x2bb70c4), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 10379264 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.174243927s of 11.277175903s, submitted: 58
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 156 ms_handle_reset con 0x560f6b4d2000 session 0x560f6ba1c1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 156 ms_handle_reset con 0x560f69b4e000 session 0x560f69b9b880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 10362880 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 10362880 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 156 ms_handle_reset con 0x560f69f2d000 session 0x560f69372700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 156 ms_handle_reset con 0x560f69b4fc00 session 0x560f69c02540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 156 ms_handle_reset con 0x560f69b4f400 session 0x560f6ba4ca80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 156 ms_handle_reset con 0x560f6b987000 session 0x560f6bfe2540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 10362880 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fc647000/0x0/0x4ffc00000, data 0x8f5fcc/0x9e5000, compress 0x0/0x0/0x0, omap 0x18f3c, meta 0x2bb70c4), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 157 ms_handle_reset con 0x560f6b4d2000 session 0x560f6b897500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 157 ms_handle_reset con 0x560f6b986000 session 0x560f685efc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 157 ms_handle_reset con 0x560f69b4e000 session 0x560f692b4700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116679 data_alloc: 218103808 data_used: 17400
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 10338304 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 158 ms_handle_reset con 0x560f69b4f400 session 0x560f6b896540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 10321920 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 158 ms_handle_reset con 0x560f69b4fc00 session 0x560f69b5ae00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 158 ms_handle_reset con 0x560f6b4d2000 session 0x560f6b484c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 158 ms_handle_reset con 0x560f6b4d2400 session 0x560f6ad26700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 91062272 unmapped: 2260992 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 159 ms_handle_reset con 0x560f6b987000 session 0x560f69b9a700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 159 ms_handle_reset con 0x560f69b4f400 session 0x560f69f24000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 2228224 heap: 93323264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 160 ms_handle_reset con 0x560f69b4e000 session 0x560f692b4380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 160 ms_handle_reset con 0x560f6b4d2000 session 0x560f6bfe2e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 160 ms_handle_reset con 0x560f6b1f9c00 session 0x560f6be01c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 160 ms_handle_reset con 0x560f69b4fc00 session 0x560f6bfe36c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 160 ms_handle_reset con 0x560f69b4e000 session 0x560f68edf180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 160 ms_handle_reset con 0x560f69b4f400 session 0x560f6b434fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 91766784 unmapped: 7856128 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 161 ms_handle_reset con 0x560f6b4d2000 session 0x560f6ad26fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174389 data_alloc: 218103808 data_used: 6833840
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 161 ms_handle_reset con 0x560f6b987000 session 0x560f6bfe2fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 8003584 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 161 ms_handle_reset con 0x560f69b4e000 session 0x560f6be01dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fc267000/0x0/0x4ffc00000, data 0xcce04c/0xdc1000, compress 0x0/0x0/0x0, omap 0x19c05, meta 0x2bb63fb), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 161 ms_handle_reset con 0x560f69b4f400 session 0x560f6a0c8380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 7979008 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 161 ms_handle_reset con 0x560f69b4fc00 session 0x560f6ba5a8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 7979008 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 91643904 unmapped: 7979008 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 161 ms_handle_reset con 0x560f6b4d2000 session 0x560f69c02a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.170081139s of 11.751296043s, submitted: 187
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 161 ms_handle_reset con 0x560f6b987000 session 0x560f69373500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 161 ms_handle_reset con 0x560f69b4f400 session 0x560f693736c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fc244000/0x0/0x4ffc00000, data 0xcf3c21/0xde8000, compress 0x0/0x0/0x0, omap 0x1a0d1, meta 0x2bb5f2f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 162 ms_handle_reset con 0x560f6b4d2000 session 0x560f6b896c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 6701056 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 163 ms_handle_reset con 0x560f69b4fc00 session 0x560f6ba1dc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 163 ms_handle_reset con 0x560f69b4e000 session 0x560f692b5500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192595 data_alloc: 218103808 data_used: 6943404
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 96952320 unmapped: 2670592 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fc238000/0x0/0x4ffc00000, data 0xcf7473/0xdf0000, compress 0x0/0x0/0x0, omap 0x1a714, meta 0x2bb58ec), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97083392 unmapped: 2539520 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 165 ms_handle_reset con 0x560f6b1f8800 session 0x560f69c021c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 96935936 unmapped: 2686976 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 96919552 unmapped: 2703360 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 166 ms_handle_reset con 0x560f69b4e000 session 0x560f6b897dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 166 ms_handle_reset con 0x560f6b1f8c00 session 0x560f6b4848c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 2678784 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fc234000/0x0/0x4ffc00000, data 0xcfc68a/0xdf6000, compress 0x0/0x0/0x0, omap 0x1b468, meta 0x2bb4b98), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219295 data_alloc: 234881024 data_used: 10624074
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 166 ms_handle_reset con 0x560f6b1f9000 session 0x560f6bfe3500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 2678784 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 2678784 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 166 ms_handle_reset con 0x560f6b1f8000 session 0x560f6ba1c540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 166 ms_handle_reset con 0x560f6b1f8400 session 0x560f6be00000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 166 ms_handle_reset con 0x560f69b4e000 session 0x560f68edf6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 4210688 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 4210688 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 4210688 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168065 data_alloc: 218103808 data_used: 6834762
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 4210688 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fc5e4000/0x0/0x4ffc00000, data 0x907619/0x9ff000, compress 0x0/0x0/0x0, omap 0x1b5ab, meta 0x2bb4a55), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.012515068s of 12.193579674s, submitted: 139
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 167 ms_handle_reset con 0x560f6b1f8000 session 0x560f68edfc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 4210688 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fc628000/0x0/0x4ffc00000, data 0x9090d0/0xa02000, compress 0x0/0x0/0x0, omap 0x1bb42, meta 0x2bb44be), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 4268032 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 167 ms_handle_reset con 0x560f6b1f8c00 session 0x560f6952a1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 167 ms_handle_reset con 0x560f6b1f9000 session 0x560f6ad268c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 4358144 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 95272960 unmapped: 4349952 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 167 ms_handle_reset con 0x560f6b1f9400 session 0x560f685ef880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 168 ms_handle_reset con 0x560f69b4e000 session 0x560f6ba5b6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180210 data_alloc: 218103808 data_used: 6838760
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 95272960 unmapped: 4349952 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 168 ms_handle_reset con 0x560f6b1f8000 session 0x560f6a0c8e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 169 ms_handle_reset con 0x560f6b1f8c00 session 0x560f68ede1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 95100928 unmapped: 4521984 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 169 ms_handle_reset con 0x560f6b1f9000 session 0x560f685ef880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fc621000/0x0/0x4ffc00000, data 0x90c86a/0xa09000, compress 0x0/0x0/0x0, omap 0x1c4d6, meta 0x2bb3b2a), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 95100928 unmapped: 4521984 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 170 ms_handle_reset con 0x560f6b1f9400 session 0x560f6ba4d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 170 ms_handle_reset con 0x560f6b1f8000 session 0x560f69b5ba40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 95100928 unmapped: 4521984 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 171 ms_handle_reset con 0x560f69b4e000 session 0x560f6952a8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 96157696 unmapped: 3465216 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 171 ms_handle_reset con 0x560f6b1f8c00 session 0x560f69b5b500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 171 ms_handle_reset con 0x560f6b1f9000 session 0x560f69b5ac40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188166 data_alloc: 218103808 data_used: 6838760
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 2408448 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 171 ms_handle_reset con 0x560f69b4ec00 session 0x560f6adbfc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 171 heartbeat osd_stat(store_statfs(0x4fb47b000/0x0/0x4ffc00000, data 0x91006a/0xa0f000, compress 0x0/0x0/0x0, omap 0x1ca22, meta 0x3d535de), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.204796791s of 10.446177483s, submitted: 96
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 2392064 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 172 ms_handle_reset con 0x560f69b4e000 session 0x560f6ba4d340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 172 ms_handle_reset con 0x560f6b1f8000 session 0x560f6b897500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 172 ms_handle_reset con 0x560f6b1f8c00 session 0x560f6ba4c380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97247232 unmapped: 2375680 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97247232 unmapped: 2375680 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fb478000/0x0/0x4ffc00000, data 0x911c22/0xa12000, compress 0x0/0x0/0x0, omap 0x1cf9b, meta 0x3d53065), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 172 ms_handle_reset con 0x560f6b1f9000 session 0x560f69373340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 172 ms_handle_reset con 0x560f6b4d2000 session 0x560f69373880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 172 ms_handle_reset con 0x560f69b4e000 session 0x560f6be01340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97239040 unmapped: 2383872 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 172 ms_handle_reset con 0x560f6b1f8c00 session 0x560f685ef6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194476 data_alloc: 218103808 data_used: 6839487
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97247232 unmapped: 2375680 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 173 ms_handle_reset con 0x560f6b1f9000 session 0x560f6adbfdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 173 ms_handle_reset con 0x560f6b1f8000 session 0x560f68edf340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 173 ms_handle_reset con 0x560f6b4d2000 session 0x560f69b9bdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 173 ms_handle_reset con 0x560f69b4e000 session 0x560f69f256c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 2310144 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 2310144 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 2310144 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fb476000/0x0/0x4ffc00000, data 0x9137e8/0xa14000, compress 0x0/0x0/0x0, omap 0x1d5da, meta 0x3d52a26), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 2310144 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 173 ms_handle_reset con 0x560f6b1f8000 session 0x560f69b9a1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197522 data_alloc: 218103808 data_used: 6839389
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 2310144 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97312768 unmapped: 2310144 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.564840317s of 10.767470360s, submitted: 79
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 173 ms_handle_reset con 0x560f6b1f8c00 session 0x560f6ba4da40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 173 ms_handle_reset con 0x560f6b1f9000 session 0x560f6ba1d6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97280000 unmapped: 2342912 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fb478000/0x0/0x4ffc00000, data 0x9137e8/0xa14000, compress 0x0/0x0/0x0, omap 0x1d8de, meta 0x3d52722), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97280000 unmapped: 2342912 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97280000 unmapped: 2342912 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 173 heartbeat osd_stat(store_statfs(0x4fb478000/0x0/0x4ffc00000, data 0x9137e8/0xa14000, compress 0x0/0x0/0x0, omap 0x1d8de, meta 0x3d52722), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 173 ms_handle_reset con 0x560f6b4d2400 session 0x560f69b5aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196229 data_alloc: 218103808 data_used: 6839389
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97411072 unmapped: 2211840 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97411072 unmapped: 2211840 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x915267/0xa17000, compress 0x0/0x0/0x0, omap 0x1dbf3, meta 0x3d5240d), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97411072 unmapped: 2211840 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97419264 unmapped: 2203648 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97419264 unmapped: 2203648 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f69b4e000 session 0x560f6b8961c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b1f8000 session 0x560f6b896540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201429 data_alloc: 218103808 data_used: 6839389
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97427456 unmapped: 2195456 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b1f8c00 session 0x560f6ba1c000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97452032 unmapped: 2170880 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.169409752s of 10.205729485s, submitted: 28
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b1f9000 session 0x560f6ba4dc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x9152d9/0xa19000, compress 0x0/0x0/0x0, omap 0x1dbf3, meta 0x3d5240d), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97476608 unmapped: 2146304 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b4d2400 session 0x560f685ee380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f69b4e000 session 0x560f6adbe1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97525760 unmapped: 2097152 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b1f8000 session 0x560f6ad27c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b1f8c00 session 0x560f6ba1c700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b4d3c00 session 0x560f6b435340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b1f9000 session 0x560f6ba4d180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 2039808 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f69b4e000 session 0x560f6b896e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211846 data_alloc: 218103808 data_used: 6839389
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 2039808 heap: 99622912 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b1f8000 session 0x560f6bfe3a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b4d3c00 session 0x560f693721c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b1f8c00 session 0x560f6b897340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b4d2800 session 0x560f6ba4d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f69b4e000 session 0x560f69372e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b1f8000 session 0x560f6bfe2c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b1f8c00 session 0x560f6ba1cc40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 98394112 unmapped: 16056320 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fac68000/0x0/0x4ffc00000, data 0x11202d9/0x1224000, compress 0x0/0x0/0x0, omap 0x1e603, meta 0x3d519fd), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 98394112 unmapped: 16056320 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b4d2800 session 0x560f6b897dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 98394112 unmapped: 16056320 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99319808 unmapped: 15130624 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316582 data_alloc: 234881024 data_used: 15067229
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 104742912 unmapped: 9707520 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f6b4d3c00 session 0x560f6be00380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f69f2d000 session 0x560f6ba5ae00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fac67000/0x0/0x4ffc00000, data 0x11202fc/0x1225000, compress 0x0/0x0/0x0, omap 0x1e603, meta 0x3d519fd), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 9674752 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 ms_handle_reset con 0x560f69b4e000 session 0x560f6c7f1a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 14696448 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x9152d9/0xa19000, compress 0x0/0x0/0x0, omap 0x1e791, meta 0x3d5186f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 14696448 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.671505928s of 11.926602364s, submitted: 122
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99500032 unmapped: 14950400 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 175 ms_handle_reset con 0x560f6b1f8000 session 0x560f6952bc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221793 data_alloc: 218103808 data_used: 6843450
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 14934016 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 175 ms_handle_reset con 0x560f6b1f8c00 session 0x560f69b5ae00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 14909440 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 175 ms_handle_reset con 0x560f6b4d2800 session 0x560f69372c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 14737408 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 176 ms_handle_reset con 0x560f69b4e000 session 0x560f68edfa40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fb46b000/0x0/0x4ffc00000, data 0x918a65/0xa1f000, compress 0x0/0x0/0x0, omap 0x1f4f0, meta 0x3d50b10), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 14876672 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 176 handle_osd_map epochs [176,177], i have 176, src has [1,177]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 177 ms_handle_reset con 0x560f69f2d000 session 0x560f6cfc7180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99590144 unmapped: 14860288 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230743 data_alloc: 218103808 data_used: 6843450
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 177 ms_handle_reset con 0x560f6b1f8c00 session 0x560f685eea80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99598336 unmapped: 14852096 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fb465000/0x0/0x4ffc00000, data 0x91a62c/0xa23000, compress 0x0/0x0/0x0, omap 0x1fd1a, meta 0x3d502e6), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 ms_handle_reset con 0x560f6b1f8000 session 0x560f692b4540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fb465000/0x0/0x4ffc00000, data 0x91a62c/0xa23000, compress 0x0/0x0/0x0, omap 0x1fd1a, meta 0x3d502e6), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 14827520 heap: 114450432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 ms_handle_reset con 0x560f6b4d3c00 session 0x560f6be00000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 ms_handle_reset con 0x560f69b4e000 session 0x560f6ba4c8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 22372352 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 ms_handle_reset con 0x560f69f2d000 session 0x560f69ee2700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99442688 unmapped: 22364160 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 ms_handle_reset con 0x560f6b1f8000 session 0x560f69f256c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 ms_handle_reset con 0x560f6b1f8c00 session 0x560f6ba4cfc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.703257561s of 10.002370834s, submitted: 144
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 ms_handle_reset con 0x560f6b987000 session 0x560f6ba5a8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99418112 unmapped: 22388736 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 ms_handle_reset con 0x560f69b4e000 session 0x560f6bfe3dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 ms_handle_reset con 0x560f69f2d000 session 0x560f6bfe3c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 ms_handle_reset con 0x560f6b1f8000 session 0x560f6bfe3180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 ms_handle_reset con 0x560f6b1f8c00 session 0x560f6ad27880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232924 data_alloc: 218103808 data_used: 6844035
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 ms_handle_reset con 0x560f6b986800 session 0x560f6a0c8c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 179 ms_handle_reset con 0x560f69b4e000 session 0x560f6cfc6e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 179 ms_handle_reset con 0x560f69f2d000 session 0x560f69373dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 179 ms_handle_reset con 0x560f6b1f8000 session 0x560f69373a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 179 ms_handle_reset con 0x560f6b1f8c00 session 0x560f6c7f1180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 22282240 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 22282240 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 179 heartbeat osd_stat(store_statfs(0x4fafd8000/0x0/0x4ffc00000, data 0xda7c55/0xeb2000, compress 0x0/0x0/0x0, omap 0x20b98, meta 0x3d4f468), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 179 heartbeat osd_stat(store_statfs(0x4fafd8000/0x0/0x4ffc00000, data 0xda7c55/0xeb2000, compress 0x0/0x0/0x0, omap 0x20b98, meta 0x3d4f468), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 22282240 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 22282240 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 179 ms_handle_reset con 0x560f6b1f9800 session 0x560f6c7f0c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99655680 unmapped: 22151168 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 179 ms_handle_reset con 0x560f69b4e000 session 0x560f69153180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265275 data_alloc: 218103808 data_used: 6844035
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99655680 unmapped: 22151168 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 180 ms_handle_reset con 0x560f69f2d000 session 0x560f6b8968c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 180 ms_handle_reset con 0x560f6b1f8000 session 0x560f6bfe3880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 22118400 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 22102016 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 180 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xda96e4/0xeb6000, compress 0x0/0x0/0x0, omap 0x21073, meta 0x3d4ef8d), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 20504576 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 20504576 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299776 data_alloc: 234881024 data_used: 11603603
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 101302272 unmapped: 20504576 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 180 heartbeat osd_stat(store_statfs(0x4fafd4000/0x0/0x4ffc00000, data 0xda96e4/0xeb6000, compress 0x0/0x0/0x0, omap 0x21073, meta 0x3d4ef8d), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.892621994s of 11.988497734s, submitted: 60
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 180 ms_handle_reset con 0x560f6b1f8c00 session 0x560f6ba4d180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 180 ms_handle_reset con 0x560f6b4d1800 session 0x560f6952a1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 180 ms_handle_reset con 0x560f69b4e000 session 0x560f68edec40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 21635072 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 21635072 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99917824 unmapped: 21889024 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 heartbeat osd_stat(store_statfs(0x4fb462000/0x0/0x4ffc00000, data 0x91f6c4/0xa2a000, compress 0x0/0x0/0x0, omap 0x21213, meta 0x3d4eded), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 heartbeat osd_stat(store_statfs(0x4fb45d000/0x0/0x4ffc00000, data 0x921260/0xa2d000, compress 0x0/0x0/0x0, omap 0x214b4, meta 0x3d4eb4c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99917824 unmapped: 21889024 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f69f2d000 session 0x560f6cfc7880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249733 data_alloc: 218103808 data_used: 6844035
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6b1f8000 session 0x560f6cfc6fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 99917824 unmapped: 21889024 heap: 121806848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 heartbeat osd_stat(store_statfs(0x4fb45c000/0x0/0x4ffc00000, data 0x9212c2/0xa2e000, compress 0x0/0x0/0x0, omap 0x214b4, meta 0x3d4eb4c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 100065280 unmapped: 34349056 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6a0e4800 session 0x560f6c7f01c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 104620032 unmapped: 29794304 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6a0e4400 session 0x560f6ba4dc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 33431552 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f69f2d000 session 0x560f6cfc76c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6a0e4800 session 0x560f6cfc6700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6b1f8000 session 0x560f6cfc68c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6a0e4000 session 0x560f69f24700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6a0e5000 session 0x560f69b9a380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f69b4e000 session 0x560f6ba4c540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f69f2d000 session 0x560f69373880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6a0e4000 session 0x560f69b9a700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6a0e4800 session 0x560f6a0c96c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6b1f8000 session 0x560f6ba1c8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 110288896 unmapped: 24125440 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2557945 data_alloc: 218103808 data_used: 6844035
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 107831296 unmapped: 26583040 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.953901768s of 10.135463715s, submitted: 144
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 103849984 unmapped: 30564352 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 heartbeat osd_stat(store_statfs(0x4e9a87000/0x0/0x4ffc00000, data 0x122f82c2/0x12405000, compress 0x0/0x0/0x0, omap 0x21b02, meta 0x3d4e4fe), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6b1f8c00 session 0x560f69b5a540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f69b4e000 session 0x560f6ba5b6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6b5ca000 session 0x560f6cfc7340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f69f2d000 session 0x560f6cfc61c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 heartbeat osd_stat(store_statfs(0x4e9a87000/0x0/0x4ffc00000, data 0x122f82c2/0x12405000, compress 0x0/0x0/0x0, omap 0x21b02, meta 0x3d4e4fe), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 103866368 unmapped: 30547968 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6a0e4800 session 0x560f6ba4ca80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6a0e4000 session 0x560f6ba4c380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f69b4e000 session 0x560f6bfe21c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f69f2d000 session 0x560f69b9a540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 102850560 unmapped: 31563776 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6a0e4800 session 0x560f69b9aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 ms_handle_reset con 0x560f6b1f8c00 session 0x560f6c7f0e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 182 ms_handle_reset con 0x560f69b4e000 session 0x560f6ba1c540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 103153664 unmapped: 31260672 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381896 data_alloc: 218103808 data_used: 6844035
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 103153664 unmapped: 31260672 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fae5f000/0x0/0x4ffc00000, data 0xf1de50/0x102b000, compress 0x0/0x0/0x0, omap 0x21e19, meta 0x3d4e1e7), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 103161856 unmapped: 31252480 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 182 ms_handle_reset con 0x560f6a0e4800 session 0x560f69373340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 28155904 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 182 ms_handle_reset con 0x560f6b5ca000 session 0x560f6b435340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 182 ms_handle_reset con 0x560f69f2d000 session 0x560f69c021c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 182 ms_handle_reset con 0x560f6a0e4000 session 0x560f69c02540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 106782720 unmapped: 27631616 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 182 ms_handle_reset con 0x560f69b4e000 session 0x560f6ba5bc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 182 ms_handle_reset con 0x560f69f2d000 session 0x560f6ba1c000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fae61000/0x0/0x4ffc00000, data 0xf1de50/0x102b000, compress 0x0/0x0/0x0, omap 0x21e19, meta 0x3d4e1e7), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 29294592 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fb45b000/0x0/0x4ffc00000, data 0x922eb2/0xa31000, compress 0x0/0x0/0x0, omap 0x220c1, meta 0x3d4df3f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346574 data_alloc: 218103808 data_used: 6844035
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 29294592 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.550458908s of 10.176035881s, submitted: 131
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f6a0e4800 session 0x560f67803c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 heartbeat osd_stat(store_statfs(0x4fb456000/0x0/0x4ffc00000, data 0x924931/0xa34000, compress 0x0/0x0/0x0, omap 0x223cd, meta 0x3d4dc33), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f6b5ca000 session 0x560f6c7f1340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105021440 unmapped: 29392896 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f6a0e5400 session 0x560f6be00540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f69b4e000 session 0x560f6c7f0700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 29376512 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f69f2d000 session 0x560f69b9bdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 29351936 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 heartbeat osd_stat(store_statfs(0x4fb159000/0x0/0x4ffc00000, data 0xc248cf/0xd33000, compress 0x0/0x0/0x0, omap 0x2276c, meta 0x3d4d894), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 heartbeat osd_stat(store_statfs(0x4fb159000/0x0/0x4ffc00000, data 0xc248cf/0xd33000, compress 0x0/0x0/0x0, omap 0x2276c, meta 0x3d4d894), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105054208 unmapped: 29360128 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489954 data_alloc: 218103808 data_used: 6844050
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105095168 unmapped: 29319168 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 heartbeat osd_stat(store_statfs(0x4f7959000/0x0/0x4ffc00000, data 0x44248cf/0x4533000, compress 0x0/0x0/0x0, omap 0x228a9, meta 0x3d4d757), peers [0,1] op hist [0,0,0,0,0,0,1])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105111552 unmapped: 29302784 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 heartbeat osd_stat(store_statfs(0x4f7959000/0x0/0x4ffc00000, data 0x44248cf/0x4533000, compress 0x0/0x0/0x0, omap 0x228a9, meta 0x3d4d757), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105103360 unmapped: 29310976 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 12525568 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105103360 unmapped: 29310976 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2321874 data_alloc: 218103808 data_used: 6844050
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105136128 unmapped: 29278208 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 heartbeat osd_stat(store_statfs(0x4ef959000/0x0/0x4ffc00000, data 0xc4248cf/0xc533000, compress 0x0/0x0/0x0, omap 0x228a9, meta 0x3d4d757), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105136128 unmapped: 29278208 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.009611130s of 10.870128632s, submitted: 71
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 20873216 heap: 134414336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f6b5ca000 session 0x560f6b4848c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f6a0e5800 session 0x560f6ad27500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f6a0e5c00 session 0x560f6b485340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f69b4e000 session 0x560f6b485c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f6b5ca000 session 0x560f685efc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105750528 unmapped: 32866304 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f6d03e400 session 0x560f6ba5a380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122437632 unmapped: 16179200 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070756 data_alloc: 218103808 data_used: 6844050
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 24420352 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 heartbeat osd_stat(store_statfs(0x4e6b04000/0x0/0x4ffc00000, data 0x152798cf/0x15388000, compress 0x0/0x0/0x0, omap 0x22a74, meta 0x3d4d58c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 105988096 unmapped: 32628736 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 106045440 unmapped: 32571392 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f6d03e800 session 0x560f6ba1d340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 32358400 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f6d03ec00 session 0x560f69b9a380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 heartbeat osd_stat(store_statfs(0x4e3304000/0x0/0x4ffc00000, data 0x18a798cf/0x18b88000, compress 0x0/0x0/0x0, omap 0x22a74, meta 0x3d4d58c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f69b4e000 session 0x560f6a0c9340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 106471424 unmapped: 32145408 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f6b5ca000 session 0x560f6cfc6700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654862 data_alloc: 218103808 data_used: 6844050
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 ms_handle_reset con 0x560f6a0e4800 session 0x560f685ef340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 31924224 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 31727616 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 108355584 unmapped: 30261248 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 184 heartbeat osd_stat(store_statfs(0x4df2fe000/0x0/0x4ffc00000, data 0x1ca7b47b/0x1cb8c000, compress 0x0/0x0/0x0, omap 0x22d1e, meta 0x3d4d2e2), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 184 handle_osd_map epochs [184,185], i have 184, src has [1,185]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.705772400s of 11.582686424s, submitted: 42
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 30195712 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 ms_handle_reset con 0x560f6d03f000 session 0x560f6c7f0540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 ms_handle_reset con 0x560f6b4d2c00 session 0x560f6b897500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 ms_handle_reset con 0x560f69b4e000 session 0x560f6cfc6000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 ms_handle_reset con 0x560f6a0e4800 session 0x560f6b4856c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 ms_handle_reset con 0x560f6b5ca000 session 0x560f6cfc7880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 ms_handle_reset con 0x560f6d03f000 session 0x560f6b4356c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 ms_handle_reset con 0x560f69ec0000 session 0x560f6ba4c540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 ms_handle_reset con 0x560f69b4e000 session 0x560f6ba5bdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 29368320 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1584008 data_alloc: 234881024 data_used: 13269123
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 29368320 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 29368320 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 ms_handle_reset con 0x560f69ec0800 session 0x560f692b4540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 29368320 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 ms_handle_reset con 0x560f69ec0400 session 0x560f6a0c8000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 ms_handle_reset con 0x560f69ec0000 session 0x560f69373a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 ms_handle_reset con 0x560f6a0e4800 session 0x560f6a0c8e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 109207552 unmapped: 29409280 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 heartbeat osd_stat(store_statfs(0x4fa16b000/0x0/0x4ffc00000, data 0x1c1005c/0x1d21000, compress 0x0/0x0/0x0, omap 0x2303c, meta 0x3d4cfc4), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 109207552 unmapped: 29409280 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1637438 data_alloc: 234881024 data_used: 21401731
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 118235136 unmapped: 20381696 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 15720448 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 16613376 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 186 heartbeat osd_stat(store_statfs(0x4f9730000/0x0/0x4ffc00000, data 0x2649adb/0x275c000, compress 0x0/0x0/0x0, omap 0x23350, meta 0x3d4ccb0), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 16613376 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 186 heartbeat osd_stat(store_statfs(0x4f9730000/0x0/0x4ffc00000, data 0x2649adb/0x275c000, compress 0x0/0x0/0x0, omap 0x23350, meta 0x3d4ccb0), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122003456 unmapped: 16613376 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705002 data_alloc: 234881024 data_used: 21825667
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 16580608 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 16580608 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122036224 unmapped: 16580608 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.329204559s of 14.715489388s, submitted: 177
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 16285696 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 186 heartbeat osd_stat(store_statfs(0x4f970e000/0x0/0x4ffc00000, data 0x266badb/0x277e000, compress 0x0/0x0/0x0, omap 0x23350, meta 0x3d4ccb0), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 16293888 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1762300 data_alloc: 234881024 data_used: 22018179
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 126115840 unmapped: 12500992 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 186 ms_handle_reset con 0x560f69ec0400 session 0x560f68edf6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 186 heartbeat osd_stat(store_statfs(0x4f8e62000/0x0/0x4ffc00000, data 0x2f17adb/0x302a000, compress 0x0/0x0/0x0, omap 0x23350, meta 0x3d4ccb0), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 13172736 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 13164544 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125452288 unmapped: 13164544 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 187 ms_handle_reset con 0x560f6b5ca000 session 0x560f6be001c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125534208 unmapped: 13082624 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1770084 data_alloc: 234881024 data_used: 22308995
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 188 ms_handle_reset con 0x560f6d03fc00 session 0x560f6ad27880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f8e51000/0x0/0x4ffc00000, data 0x2f21275/0x3037000, compress 0x0/0x0/0x0, omap 0x23a4d, meta 0x3d4c5b3), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125607936 unmapped: 13008896 heap: 138616832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 189 ms_handle_reset con 0x560f6ad9a400 session 0x560f6bfe2e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 189 ms_handle_reset con 0x560f6d03f000 session 0x560f6ad27dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 189 ms_handle_reset con 0x560f6d03f000 session 0x560f6ba1d500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 189 ms_handle_reset con 0x560f69ec0400 session 0x560f6cfc7dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 189 ms_handle_reset con 0x560f6ad9a400 session 0x560f6bfe3880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 189 ms_handle_reset con 0x560f6b5ca000 session 0x560f6b435880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125460480 unmapped: 21553152 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 189 ms_handle_reset con 0x560f6d03fc00 session 0x560f6bfe3a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 21929984 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.884089470s of 10.235930443s, submitted: 123
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 190 ms_handle_reset con 0x560f6d03fc00 session 0x560f692b4700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 21929984 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 21929984 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 190 ms_handle_reset con 0x560f6ad9a400 session 0x560f6b897880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 190 ms_handle_reset con 0x560f6b5ca000 session 0x560f6ad26000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 190 heartbeat osd_stat(store_statfs(0x4f8178000/0x0/0x4ffc00000, data 0x3bf9a01/0x3d12000, compress 0x0/0x0/0x0, omap 0x243fe, meta 0x3d4bc02), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 190 ms_handle_reset con 0x560f6ad9a800 session 0x560f69ee28c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1856617 data_alloc: 234881024 data_used: 22308995
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 190 handle_osd_map epochs [191,191], i have 191, src has [1,191]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 191 ms_handle_reset con 0x560f6d03f000 session 0x560f6ba5ba40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125272064 unmapped: 21741568 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 191 ms_handle_reset con 0x560f6ad9a400 session 0x560f69c02c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 191 ms_handle_reset con 0x560f6ad9a800 session 0x560f6bfe3500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 191 ms_handle_reset con 0x560f6b5ca000 session 0x560f69ee2a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 191 ms_handle_reset con 0x560f6d03f000 session 0x560f6be00e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f8178000/0x0/0x4ffc00000, data 0x3bfc99f/0x3d14000, compress 0x0/0x0/0x0, omap 0x24614, meta 0x3d4b9ec), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 192 ms_handle_reset con 0x560f6d03fc00 session 0x560f69c02fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 21823488 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134348800 unmapped: 12664832 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 192 ms_handle_reset con 0x560f69ec0800 session 0x560f69f25c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 192 ms_handle_reset con 0x560f6d03e400 session 0x560f69373dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 192 ms_handle_reset con 0x560f6d03e800 session 0x560f6b435a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 192 ms_handle_reset con 0x560f69ec0400 session 0x560f69b5afc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133693440 unmapped: 13320192 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 192 ms_handle_reset con 0x560f6ad9a400 session 0x560f6fe0d500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 192 ms_handle_reset con 0x560f69b4e000 session 0x560f6b484c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 192 ms_handle_reset con 0x560f69ec0000 session 0x560f6ba5a1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f8171000/0x0/0x4ffc00000, data 0x3c0019f/0x3d1b000, compress 0x0/0x0/0x0, omap 0x24f51, meta 0x3d4b0af), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 192 ms_handle_reset con 0x560f6b5ca000 session 0x560f6ba4c700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127590400 unmapped: 19423232 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1643408 data_alloc: 234881024 data_used: 19070184
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127590400 unmapped: 19423232 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127590400 unmapped: 19423232 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 193 ms_handle_reset con 0x560f6d03f000 session 0x560f6fdb7500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 193 ms_handle_reset con 0x560f6d804c00 session 0x560f6fe0c8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127590400 unmapped: 19423232 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fa766000/0x0/0x4ffc00000, data 0x160ac2a/0x1726000, compress 0x0/0x0/0x0, omap 0x253b0, meta 0x3d4ac50), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 193 ms_handle_reset con 0x560f69b4e000 session 0x560f6ba1ca80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 193 ms_handle_reset con 0x560f69ec0000 session 0x560f6fb22a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.543696404s of 10.187750816s, submitted: 127
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 194 ms_handle_reset con 0x560f6b5ca000 session 0x560f6fdb7a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127647744 unmapped: 19365888 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127647744 unmapped: 19365888 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 194 ms_handle_reset con 0x560f6d03f000 session 0x560f6fdb76c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 194 heartbeat osd_stat(store_statfs(0x4fa760000/0x0/0x4ffc00000, data 0x160c844/0x172a000, compress 0x0/0x0/0x0, omap 0x25838, meta 0x3d4a7c8), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1630680 data_alloc: 234881024 data_used: 19070184
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 194 ms_handle_reset con 0x560f6d804c00 session 0x560f6cfc6fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127270912 unmapped: 19742720 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 194 handle_osd_map epochs [196,196], i have 194, src has [1,196]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 194 handle_osd_map epochs [195,196], i have 194, src has [1,196]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 196 ms_handle_reset con 0x560f69b4e000 session 0x560f6adbea80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127287296 unmapped: 19726336 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 16793600 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 17195008 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 17195008 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 196 handle_osd_map epochs [196,197], i have 196, src has [1,197]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 heartbeat osd_stat(store_statfs(0x4f9fd3000/0x0/0x4ffc00000, data 0x1d81fec/0x1ea2000, compress 0x0/0x0/0x0, omap 0x2598b, meta 0x3d4a675), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1696500 data_alloc: 234881024 data_used: 19985229
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127934464 unmapped: 19079168 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127934464 unmapped: 19079168 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127934464 unmapped: 19079168 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127934464 unmapped: 19079168 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.005832672s of 10.239164352s, submitted: 105
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 heartbeat osd_stat(store_statfs(0x4f9fe5000/0x0/0x4ffc00000, data 0x1d83a87/0x1ea5000, compress 0x0/0x0/0x0, omap 0x25cb1, meta 0x3d4a34f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 ms_handle_reset con 0x560f69ec0000 session 0x560f6b485880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 ms_handle_reset con 0x560f6b5ca000 session 0x560f68edefc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127934464 unmapped: 19079168 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 ms_handle_reset con 0x560f6d03f000 session 0x560f69ee3180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 ms_handle_reset con 0x560f6b222000 session 0x560f6fb22c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 ms_handle_reset con 0x560f69b4e000 session 0x560f6fb22fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 ms_handle_reset con 0x560f69ec0000 session 0x560f6fafea80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 ms_handle_reset con 0x560f6b5ca000 session 0x560f6fb23880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 ms_handle_reset con 0x560f6d03f000 session 0x560f6c7f0c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 ms_handle_reset con 0x560f6b987c00 session 0x560f6eb3bc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1732563 data_alloc: 234881024 data_used: 19993421
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127975424 unmapped: 19038208 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 heartbeat osd_stat(store_statfs(0x4f9aff000/0x0/0x4ffc00000, data 0x226aa97/0x238d000, compress 0x0/0x0/0x0, omap 0x25fb8, meta 0x3d4a048), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 197 handle_osd_map epochs [198,198], i have 198, src has [1,198]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 198 heartbeat osd_stat(store_statfs(0x4f9afa000/0x0/0x4ffc00000, data 0x226c516/0x2390000, compress 0x0/0x0/0x0, omap 0x2616b, meta 0x3d49e95), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127975424 unmapped: 19038208 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 198 ms_handle_reset con 0x560f69b4e000 session 0x560f6eb3aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127975424 unmapped: 19038208 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127975424 unmapped: 19038208 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 198 ms_handle_reset con 0x560f69ec0000 session 0x560f6eb3b6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 198 ms_handle_reset con 0x560f6b5ca000 session 0x560f6fafe1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127975424 unmapped: 19038208 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 199 ms_handle_reset con 0x560f6cf78400 session 0x560f6eb3a700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1741588 data_alloc: 234881024 data_used: 19993437
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127983616 unmapped: 19030016 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 200 ms_handle_reset con 0x560f6a0e3c00 session 0x560f6b4841c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 200 ms_handle_reset con 0x560f691d5800 session 0x560f6eb3a540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 200 ms_handle_reset con 0x560f69b4e000 session 0x560f6b484000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129712128 unmapped: 17301504 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 200 ms_handle_reset con 0x560f68ee5000 session 0x560f6cfc6000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 200 ms_handle_reset con 0x560f6da93800 session 0x560f6952a700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 200 heartbeat osd_stat(store_statfs(0x4f9af0000/0x0/0x4ffc00000, data 0x226fc81/0x2398000, compress 0x0/0x0/0x0, omap 0x26870, meta 0x3d49790), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129728512 unmapped: 17285120 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 201 ms_handle_reset con 0x560f6da93400 session 0x560f6b484540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 201 ms_handle_reset con 0x560f68ee5000 session 0x560f6ba5b340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 17293312 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 17293312 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1776638 data_alloc: 234881024 data_used: 24849757
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 201 ms_handle_reset con 0x560f6d804000 session 0x560f6b896540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 201 ms_handle_reset con 0x560f6d804400 session 0x560f6b897a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129572864 unmapped: 17440768 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.019957542s of 12.144400597s, submitted: 58
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 202 heartbeat osd_stat(store_statfs(0x4f9af0000/0x0/0x4ffc00000, data 0x2271861/0x239a000, compress 0x0/0x0/0x0, omap 0x26bb8, meta 0x3d49448), peers [0,1] op hist [0,6])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 202 ms_handle_reset con 0x560f68ee4c00 session 0x560f6952a380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 202 ms_handle_reset con 0x560f68ee4800 session 0x560f6ef33180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122806272 unmapped: 24207360 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 202 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ef33500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 202 ms_handle_reset con 0x560f68ee5000 session 0x560f6b897180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 202 heartbeat osd_stat(store_statfs(0x4faf35000/0x0/0x4ffc00000, data 0xe2c409/0xf55000, compress 0x0/0x0/0x0, omap 0x26fb5, meta 0x3d4904b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 24190976 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 24190976 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 24748032 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 203 handle_osd_map epochs [203,204], i have 203, src has [1,204]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1586798 data_alloc: 234881024 data_used: 11701597
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 24158208 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 204 heartbeat osd_stat(store_statfs(0x4faf2d000/0x0/0x4ffc00000, data 0xe2fab0/0xf5b000, compress 0x0/0x0/0x0, omap 0x27669, meta 0x3d48997), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125960192 unmapped: 21053440 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 204 heartbeat osd_stat(store_statfs(0x4fa67c000/0x0/0x4ffc00000, data 0x16e4ab0/0x1810000, compress 0x0/0x0/0x0, omap 0x27669, meta 0x3d48997), peers [0,1] op hist [0,0,0,0,0,1])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 204 ms_handle_reset con 0x560f6d804000 session 0x560f6ba4c1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 20267008 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 204 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x1780ab0/0x18ac000, compress 0x0/0x0/0x0, omap 0x27669, meta 0x3d48997), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 204 ms_handle_reset con 0x560f6d804400 session 0x560f6ba1cfc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 20930560 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 204 ms_handle_reset con 0x560f6da93800 session 0x560f6ef328c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 204 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fdb6fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124878848 unmapped: 22134784 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1646304 data_alloc: 234881024 data_used: 12709213
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124870656 unmapped: 22142976 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 205 ms_handle_reset con 0x560f68ee5000 session 0x560f6fe0c1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124870656 unmapped: 22142976 heap: 147013632 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.442721367s of 11.146203995s, submitted: 211
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 205 ms_handle_reset con 0x560f6d804000 session 0x560f6faff880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 205 ms_handle_reset con 0x560f6d804400 session 0x560f6eb3a1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 25149440 heap: 151207936 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 205 heartbeat osd_stat(store_statfs(0x4f9bb1000/0x0/0x4ffc00000, data 0x21ac5f3/0x22db000, compress 0x0/0x0/0x0, omap 0x279ab, meta 0x3d48655), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 205 ms_handle_reset con 0x560f6da93400 session 0x560f69373c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 205 ms_handle_reset con 0x560f68ee4c00 session 0x560f6eb3a700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 126263296 unmapped: 24944640 heap: 151207936 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 205 heartbeat osd_stat(store_statfs(0x4f9b90000/0x0/0x4ffc00000, data 0x21ce591/0x22fc000, compress 0x0/0x0/0x0, omap 0x279ab, meta 0x3d48655), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 205 ms_handle_reset con 0x560f68ee5000 session 0x560f6c7f1340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 126296064 unmapped: 24911872 heap: 151207936 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 206 ms_handle_reset con 0x560f6d804400 session 0x560f6fb23dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1757267 data_alloc: 234881024 data_used: 12709485
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 206 ms_handle_reset con 0x560f6a0e3000 session 0x560f6fe0d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125730816 unmapped: 25477120 heap: 151207936 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 207 ms_handle_reset con 0x560f6a0e3400 session 0x560f6fafe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 207 ms_handle_reset con 0x560f6d804000 session 0x560f6be00540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 207 ms_handle_reset con 0x560f68ee4c00 session 0x560f69373500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125779968 unmapped: 25427968 heap: 151207936 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 208 ms_handle_reset con 0x560f68ee5000 session 0x560f69ee2380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125829120 unmapped: 25378816 heap: 151207936 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 208 ms_handle_reset con 0x560f6a0e3000 session 0x560f6be00fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 208 handle_osd_map epochs [208,209], i have 208, src has [1,209]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125845504 unmapped: 25362432 heap: 151207936 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 209 ms_handle_reset con 0x560f6a0e2c00 session 0x560f6ef33880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 209 ms_handle_reset con 0x560f6d804400 session 0x560f6fe0c380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 209 heartbeat osd_stat(store_statfs(0x4f9a48000/0x0/0x4ffc00000, data 0x230d4c5/0x2442000, compress 0x0/0x0/0x0, omap 0x283fc, meta 0x3d47c04), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 209 ms_handle_reset con 0x560f68ee5000 session 0x560f6a0c9500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 209 ms_handle_reset con 0x560f6d804000 session 0x560f6ba5aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 209 ms_handle_reset con 0x560f6a0e3000 session 0x560f6fb23a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 210 ms_handle_reset con 0x560f6d804c00 session 0x560f6cfc7340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 210 ms_handle_reset con 0x560f68ee4c00 session 0x560f6b897500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127098880 unmapped: 28311552 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1811173 data_alloc: 234881024 data_used: 12709485
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127172608 unmapped: 28237824 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 210 heartbeat osd_stat(store_statfs(0x4f8f87000/0x0/0x4ffc00000, data 0x2dcb061/0x2f01000, compress 0x0/0x0/0x0, omap 0x286d6, meta 0x3d4792a), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 210 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 212 ms_handle_reset con 0x560f68ee5000 session 0x560f6fe0dc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 212 ms_handle_reset con 0x560f6a0e3000 session 0x560f685efa40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127221760 unmapped: 28188672 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 212 handle_osd_map epochs [212,213], i have 212, src has [1,213]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.851129532s of 10.219752312s, submitted: 133
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 213 ms_handle_reset con 0x560f6d804000 session 0x560f68ede8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 213 ms_handle_reset con 0x560f6d804400 session 0x560f6ba5bdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127229952 unmapped: 28180480 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 213 ms_handle_reset con 0x560f68ee4c00 session 0x560f6952bc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 213 ms_handle_reset con 0x560f68ee5000 session 0x560f6a0c8000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127238144 unmapped: 28172288 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 213 handle_osd_map epochs [213,214], i have 213, src has [1,214]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 214 ms_handle_reset con 0x560f6a0e3000 session 0x560f6ba5a380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 214 ms_handle_reset con 0x560f6d804000 session 0x560f6b897dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 214 ms_handle_reset con 0x560f6d804c00 session 0x560f6cfc6700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127287296 unmapped: 28123136 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 214 heartbeat osd_stat(store_statfs(0x4fa3f0000/0x0/0x4ffc00000, data 0x17cb08e/0x1903000, compress 0x0/0x0/0x0, omap 0x29146, meta 0x3d46eba), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 215 ms_handle_reset con 0x560f6d805400 session 0x560f69b9b6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1698987 data_alloc: 234881024 data_used: 12713499
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 215 ms_handle_reset con 0x560f68ee4c00 session 0x560f6faffa40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127295488 unmapped: 28114944 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 215 ms_handle_reset con 0x560f6d03f000 session 0x560f6fb23c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 216 ms_handle_reset con 0x560f6b986800 session 0x560f6cfc6e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 216 ms_handle_reset con 0x560f68ee5000 session 0x560f6cfc7880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 34185216 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 216 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fdb6380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 216 heartbeat osd_stat(store_statfs(0x4fa3e3000/0x0/0x4ffc00000, data 0x17d188e/0x190c000, compress 0x0/0x0/0x0, omap 0x2970e, meta 0x3d468f2), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 216 ms_handle_reset con 0x560f6b7ffc00 session 0x560f6ba5afc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 216 ms_handle_reset con 0x560f6b7ff000 session 0x560f6b897c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 34185216 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 216 ms_handle_reset con 0x560f6b7fec00 session 0x560f6b485dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 217 ms_handle_reset con 0x560f6b1f9400 session 0x560f6fb22700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 217 heartbeat osd_stat(store_statfs(0x4fb3ef000/0x0/0x4ffc00000, data 0x95f4a1/0xa9a000, compress 0x0/0x0/0x0, omap 0x29e8b, meta 0x3d46175), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 121225216 unmapped: 34185216 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 217 heartbeat osd_stat(store_statfs(0x4fb3ef000/0x0/0x4ffc00000, data 0x95f4a1/0xa9a000, compress 0x0/0x0/0x0, omap 0x29e8b, meta 0x3d46175), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 218 ms_handle_reset con 0x560f6b1f8000 session 0x560f6fb228c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 218 ms_handle_reset con 0x560f6b1f9400 session 0x560f6fb221c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 218 ms_handle_reset con 0x560f6b1f9800 session 0x560f69b9afc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 34177024 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 219 heartbeat osd_stat(store_statfs(0x4fb3e9000/0x0/0x4ffc00000, data 0x9615d1/0xa9e000, compress 0x0/0x0/0x0, omap 0x2a1dc, meta 0x3d45e24), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1606284 data_alloc: 218103808 data_used: 7382491
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 34177024 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 121233408 unmapped: 34177024 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.180172920s of 10.540587425s, submitted: 151
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 121241600 unmapped: 34168832 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 220 ms_handle_reset con 0x560f6b7fec00 session 0x560f69b5a000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 220 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ef33dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 220 ms_handle_reset con 0x560f6b1f8000 session 0x560f6adbf180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 221 ms_handle_reset con 0x560f6b1f9400 session 0x560f6fdb6380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 221 ms_handle_reset con 0x560f6b1f9800 session 0x560f6fdb6c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 221 ms_handle_reset con 0x560f6b7fec00 session 0x560f6a0c9500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 221 ms_handle_reset con 0x560f6b7ff000 session 0x560f6a0c96c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 121249792 unmapped: 34160640 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 121249792 unmapped: 34160640 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 221 ms_handle_reset con 0x560f6b1f8000 session 0x560f68ede8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1613342 data_alloc: 218103808 data_used: 7385033
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 221 heartbeat osd_stat(store_statfs(0x4fb3e2000/0x0/0x4ffc00000, data 0x966834/0xaa7000, compress 0x0/0x0/0x0, omap 0x2ab31, meta 0x3d454cf), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 221 handle_osd_map epochs [222,222], i have 222, src has [1,222]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 221 handle_osd_map epochs [222,222], i have 222, src has [1,222]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 222 ms_handle_reset con 0x560f6b1f9400 session 0x560f6a0c8380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 33103872 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 222 handle_osd_map epochs [222,223], i have 223, src has [1,223]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 33103872 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 223 ms_handle_reset con 0x560f6b7fec00 session 0x560f6bfe2540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 224 ms_handle_reset con 0x560f6b1f9800 session 0x560f6eb3aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 224 ms_handle_reset con 0x560f6b7ffc00 session 0x560f6fdb6fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 32030720 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 224 heartbeat osd_stat(store_statfs(0x4fb3d8000/0x0/0x4ffc00000, data 0x96bc58/0xab0000, compress 0x0/0x0/0x0, omap 0x2b412, meta 0x3d44bee), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 224 heartbeat osd_stat(store_statfs(0x4fb3d8000/0x0/0x4ffc00000, data 0x96bc58/0xab0000, compress 0x0/0x0/0x0, omap 0x2b412, meta 0x3d44bee), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 224 ms_handle_reset con 0x560f6b1f9400 session 0x560f69b9a000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 224 handle_osd_map epochs [224,225], i have 224, src has [1,225]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 225 ms_handle_reset con 0x560f6b1f8000 session 0x560f6b434fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 32014336 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 32014336 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 225 ms_handle_reset con 0x560f6b7fec00 session 0x560f6c7f16c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1625380 data_alloc: 218103808 data_used: 7384887
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 226 ms_handle_reset con 0x560f6b1f9800 session 0x560f6ad26fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 31965184 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 31965184 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 226 ms_handle_reset con 0x560f6b1f8800 session 0x560f6eb3a700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 226 ms_handle_reset con 0x560f6b1f8400 session 0x560f6ba5bdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 31965184 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 226 ms_handle_reset con 0x560f6b1f8000 session 0x560f6a0c8000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.677922249s of 10.912144661s, submitted: 87
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 226 ms_handle_reset con 0x560f6b1f9400 session 0x560f68edec40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 31965184 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 226 heartbeat osd_stat(store_statfs(0x4fb3d3000/0x0/0x4ffc00000, data 0x96ef0c/0xab7000, compress 0x0/0x0/0x0, omap 0x2bc54, meta 0x3d443ac), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 31965184 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 226 ms_handle_reset con 0x560f6b1f9800 session 0x560f6fdb6e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 226 ms_handle_reset con 0x560f6b7fec00 session 0x560f69ee3dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1630950 data_alloc: 218103808 data_used: 7385516
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 30916608 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b1f8000 session 0x560f69ee3500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b1f8400 session 0x560f69b5ae00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 heartbeat osd_stat(store_statfs(0x4fb3d3000/0x0/0x4ffc00000, data 0x970aea/0xab7000, compress 0x0/0x0/0x0, omap 0x2be81, meta 0x3d4417f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 30916608 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 heartbeat osd_stat(store_statfs(0x4fb3d3000/0x0/0x4ffc00000, data 0x970aea/0xab7000, compress 0x0/0x0/0x0, omap 0x2be81, meta 0x3d4417f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124493824 unmapped: 30916608 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b1f9400 session 0x560f69b5b180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b1f9800 session 0x560f6ba1d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b222400 session 0x560f6fdb6700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b1f8000 session 0x560f6ba5afc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b1f8400 session 0x560f6b897500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b1f9400 session 0x560f69372e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124346368 unmapped: 31064064 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124346368 unmapped: 31064064 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1633779 data_alloc: 218103808 data_used: 7385500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124346368 unmapped: 31064064 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b1f9800 session 0x560f6952aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6d03ec00 session 0x560f6cfc68c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b1f8000 session 0x560f6952a1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b987c00 session 0x560f6be01340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b986800 session 0x560f69b9ac40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124510208 unmapped: 30900224 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 heartbeat osd_stat(store_statfs(0x4fb125000/0x0/0x4ffc00000, data 0xc20aea/0xd67000, compress 0x0/0x0/0x0, omap 0x2c0f7, meta 0x3d43f09), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6d03f000 session 0x560f6fb23180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124510208 unmapped: 30900224 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6d805400 session 0x560f6eb3b6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b1f8000 session 0x560f6fb228c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.028331757s of 10.135429382s, submitted: 60
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6b986800 session 0x560f6fb23c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 ms_handle_reset con 0x560f6d03fc00 session 0x560f6eb3ae00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124870656 unmapped: 30539776 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 heartbeat osd_stat(store_statfs(0x4fb0fa000/0x0/0x4ffc00000, data 0xc4aafa/0xd92000, compress 0x0/0x0/0x0, omap 0x2c2cb, meta 0x3d43d35), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124870656 unmapped: 30539776 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 227 handle_osd_map epochs [227,228], i have 228, src has [1,228]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1660878 data_alloc: 218103808 data_used: 7385500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124878848 unmapped: 30531584 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 228 ms_handle_reset con 0x560f6d804c00 session 0x560f69b5b6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 30392320 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125288448 unmapped: 30121984 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125288448 unmapped: 30121984 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125288448 unmapped: 30121984 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 228 heartbeat osd_stat(store_statfs(0x4fb0f5000/0x0/0x4ffc00000, data 0xc4c579/0xd95000, compress 0x0/0x0/0x0, omap 0x2c73b, meta 0x3d438c5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1679079 data_alloc: 234881024 data_used: 10114460
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125288448 unmapped: 30121984 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125288448 unmapped: 30121984 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 228 heartbeat osd_stat(store_statfs(0x4fb0f5000/0x0/0x4ffc00000, data 0xc4c579/0xd95000, compress 0x0/0x0/0x0, omap 0x2c73b, meta 0x3d438c5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125288448 unmapped: 30121984 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125288448 unmapped: 30121984 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125288448 unmapped: 30121984 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1679079 data_alloc: 234881024 data_used: 10114460
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 125288448 unmapped: 30121984 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 124805120 unmapped: 30605312 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.167225838s of 13.204381943s, submitted: 22
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 228 heartbeat osd_stat(store_statfs(0x4fb0f5000/0x0/0x4ffc00000, data 0xc4c579/0xd95000, compress 0x0/0x0/0x0, omap 0x2c73b, meta 0x3d438c5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 28164096 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 128311296 unmapped: 27099136 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 25419776 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 228 heartbeat osd_stat(store_statfs(0x4fa0b6000/0x0/0x4ffc00000, data 0x1c8d579/0x1dd6000, compress 0x0/0x0/0x0, omap 0x2c73b, meta 0x3d438c5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1799065 data_alloc: 234881024 data_used: 10664348
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 228 heartbeat osd_stat(store_statfs(0x4fa086000/0x0/0x4ffc00000, data 0x1cbd579/0x1e06000, compress 0x0/0x0/0x0, omap 0x2c73b, meta 0x3d438c5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 24158208 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 24158208 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 24158208 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 24158208 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 24158208 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1795201 data_alloc: 234881024 data_used: 10664348
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 24707072 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 228 heartbeat osd_stat(store_statfs(0x4fa083000/0x0/0x4ffc00000, data 0x1cc0579/0x1e09000, compress 0x0/0x0/0x0, omap 0x2c73b, meta 0x3d438c5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 24707072 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 228 heartbeat osd_stat(store_statfs(0x4fa083000/0x0/0x4ffc00000, data 0x1cc0579/0x1e09000, compress 0x0/0x0/0x0, omap 0x2c73b, meta 0x3d438c5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 24707072 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.436706543s of 11.887578011s, submitted: 124
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 228 ms_handle_reset con 0x560f68ee5000 session 0x560f69ee36c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 24576000 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 228 handle_osd_map epochs [228,229], i have 229, src has [1,229]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 229 ms_handle_reset con 0x560f68ee5000 session 0x560f69f256c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 24576000 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 230 ms_handle_reset con 0x560f6b1f8000 session 0x560f69ee3880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 230 ms_handle_reset con 0x560f68ee4c00 session 0x560f6b896700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1801773 data_alloc: 234881024 data_used: 10664364
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 230 heartbeat osd_stat(store_statfs(0x4fa079000/0x0/0x4ffc00000, data 0x1cc3cb1/0x1e0f000, compress 0x0/0x0/0x0, omap 0x2cd41, meta 0x3d432bf), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 24576000 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 230 ms_handle_reset con 0x560f6d03fc00 session 0x560f6cfc6380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 230 ms_handle_reset con 0x560f68ee4800 session 0x560f6fdb7340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 130965504 unmapped: 24444928 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 231 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ef32fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 24436736 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 232 ms_handle_reset con 0x560f68ee5000 session 0x560f69f24700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 232 ms_handle_reset con 0x560f6d804c00 session 0x560f6fafefc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 24436736 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 232 ms_handle_reset con 0x560f6b986800 session 0x560f69f24380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 232 ms_handle_reset con 0x560f6d03fc00 session 0x560f6ba5a8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 232 heartbeat osd_stat(store_statfs(0x4fa075000/0x0/0x4ffc00000, data 0x1cc745b/0x1e17000, compress 0x0/0x0/0x0, omap 0x2d345, meta 0x3d42cbb), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 232 ms_handle_reset con 0x560f6d03fc00 session 0x560f6fb22e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 232 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba4da40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 24862720 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 232 ms_handle_reset con 0x560f68ee5000 session 0x560f69ee2540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 232 ms_handle_reset con 0x560f6b986800 session 0x560f6eb3ba40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 233 ms_handle_reset con 0x560f6b1f8000 session 0x560f6ef33a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 233 ms_handle_reset con 0x560f6d804400 session 0x560f69ee2e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 233 ms_handle_reset con 0x560f6b7fec00 session 0x560f69372fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869821 data_alloc: 234881024 data_used: 10664380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127770624 unmapped: 27639808 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 233 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba5aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 27615232 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 233 heartbeat osd_stat(store_statfs(0x4fa482000/0x0/0x4ffc00000, data 0x18b80ad/0x1a0a000, compress 0x0/0x0/0x0, omap 0x2d7af, meta 0x3d42851), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 233 ms_handle_reset con 0x560f6b1f8000 session 0x560f69f25340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 27615232 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 234 ms_handle_reset con 0x560f6b986800 session 0x560f6b485a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 234 ms_handle_reset con 0x560f68ee4c00 session 0x560f69373180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 27615232 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.169899940s of 10.325560570s, submitted: 97
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 234 handle_osd_map epochs [234,235], i have 234, src has [1,235]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 235 ms_handle_reset con 0x560f6d03fc00 session 0x560f69b9b340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 235 ms_handle_reset con 0x560f6b986800 session 0x560f692b4000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 235 ms_handle_reset con 0x560f68ee5000 session 0x560f6fafee00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 235 ms_handle_reset con 0x560f6d804400 session 0x560f6be01dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 235 ms_handle_reset con 0x560f6d804400 session 0x560f6be00380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 27549696 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 236 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fafec40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 236 ms_handle_reset con 0x560f68ee5000 session 0x560f685ef6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1829013 data_alloc: 234881024 data_used: 13860780
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129630208 unmapped: 25780224 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129630208 unmapped: 25780224 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 237 heartbeat osd_stat(store_statfs(0x4fa478000/0x0/0x4ffc00000, data 0x18bd3fd/0x1a12000, compress 0x0/0x0/0x0, omap 0x2e2a4, meta 0x3d41d5c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 237 ms_handle_reset con 0x560f6b986800 session 0x560f685ef880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129630208 unmapped: 25780224 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129646592 unmapped: 25763840 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 237 ms_handle_reset con 0x560f6b7ffc00 session 0x560f6be01a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129646592 unmapped: 25763840 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 238 ms_handle_reset con 0x560f6b987c00 session 0x560f6c7f0a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 238 ms_handle_reset con 0x560f6d03f000 session 0x560f6b896540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 238 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba5a540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1778708 data_alloc: 234881024 data_used: 13633452
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129654784 unmapped: 25755648 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129654784 unmapped: 25755648 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 238 heartbeat osd_stat(store_statfs(0x4fac4a000/0x0/0x4ffc00000, data 0x10e8a98/0x1240000, compress 0x0/0x0/0x0, omap 0x2e8ae, meta 0x3d41752), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129654784 unmapped: 25755648 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 238 heartbeat osd_stat(store_statfs(0x4fac4a000/0x0/0x4ffc00000, data 0x10e8a98/0x1240000, compress 0x0/0x0/0x0, omap 0x2e8ae, meta 0x3d41752), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129654784 unmapped: 25755648 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.636238098s of 10.784171104s, submitted: 77
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 129884160 unmapped: 25526272 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1852236 data_alloc: 234881024 data_used: 14048172
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 19759104 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 239 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x1c49537/0x1da2000, compress 0x0/0x0/0x0, omap 0x2ec90, meta 0x3d41370), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134070272 unmapped: 21340160 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 239 handle_osd_map epochs [239,240], i have 239, src has [1,240]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 240 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1ce60c5/0x1e3f000, compress 0x0/0x0/0x0, omap 0x2edef, meta 0x3d41211), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134070272 unmapped: 21340160 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134078464 unmapped: 21331968 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 241 handle_osd_map epochs [241,242], i have 241, src has [1,242]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 242 ms_handle_reset con 0x560f68ee5000 session 0x560f6ba4c000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 242 ms_handle_reset con 0x560f6b986800 session 0x560f6c7f1180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134078464 unmapped: 21331968 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 242 heartbeat osd_stat(store_statfs(0x4fa03b000/0x0/0x4ffc00000, data 0x1ce98a5/0x1e45000, compress 0x0/0x0/0x0, omap 0x2f41b, meta 0x3d40be5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869678 data_alloc: 234881024 data_used: 14604700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134078464 unmapped: 21331968 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134078464 unmapped: 21331968 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134078464 unmapped: 21331968 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134078464 unmapped: 21331968 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 242 heartbeat osd_stat(store_statfs(0x4fa027000/0x0/0x4ffc00000, data 0x1d0a895/0x1e65000, compress 0x0/0x0/0x0, omap 0x2f41b, meta 0x3d40be5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.803764343s of 10.126688004s, submitted: 192
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134086656 unmapped: 21323776 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 243 ms_handle_reset con 0x560f68ee4c00 session 0x560f6b896000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869084 data_alloc: 234881024 data_used: 14608796
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 243 heartbeat osd_stat(store_statfs(0x4fa022000/0x0/0x4ffc00000, data 0x1d0c314/0x1e68000, compress 0x0/0x0/0x0, omap 0x2f5e4, meta 0x3d40a1c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134086656 unmapped: 21323776 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 244 ms_handle_reset con 0x560f6d03f000 session 0x560f6be016c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134094848 unmapped: 21315584 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 244 ms_handle_reset con 0x560f6b987c00 session 0x560f692b4380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 245 ms_handle_reset con 0x560f6d804400 session 0x560f6bfe28c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134217728 unmapped: 21192704 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 246 ms_handle_reset con 0x560f6d03fc00 session 0x560f68ede1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f9e18000/0x0/0x4ffc00000, data 0x1f10a5c/0x2070000, compress 0x0/0x0/0x0, omap 0x2fdd7, meta 0x3d40229), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134307840 unmapped: 21102592 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 246 ms_handle_reset con 0x560f6b987c00 session 0x560f6ba1d340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 246 handle_osd_map epochs [246,247], i have 246, src has [1,247]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 246 ms_handle_reset con 0x560f68ee5000 session 0x560f68edf500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 247 ms_handle_reset con 0x560f6d03f000 session 0x560f6bfe3880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 247 ms_handle_reset con 0x560f6d804400 session 0x560f6b485a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134324224 unmapped: 21086208 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 248 ms_handle_reset con 0x560f6d804c00 session 0x560f6fdb6000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 248 ms_handle_reset con 0x560f68ee4c00 session 0x560f6b485880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1902966 data_alloc: 234881024 data_used: 14621116
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134332416 unmapped: 21078016 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 248 handle_osd_map epochs [248,249], i have 249, src has [1,249]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 249 ms_handle_reset con 0x560f68ee5000 session 0x560f69372380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 249 ms_handle_reset con 0x560f6d03f000 session 0x560f69372fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134356992 unmapped: 21053440 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 250 ms_handle_reset con 0x560f6da93400 session 0x560f6b897180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 250 ms_handle_reset con 0x560f6ad9a400 session 0x560f6fe0ca80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 250 ms_handle_reset con 0x560f6d804400 session 0x560f6ef33dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 21028864 heap: 155410432 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 250 handle_osd_map epochs [250,251], i have 250, src has [1,251]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 251 handle_osd_map epochs [251,251], i have 251, src has [1,251]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 251 ms_handle_reset con 0x560f6da93800 session 0x560f6faffc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 251 ms_handle_reset con 0x560f68ee4c00 session 0x560f685ef340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 251 ms_handle_reset con 0x560f6b987c00 session 0x560f69b9afc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 251 ms_handle_reset con 0x560f6d03f000 session 0x560f6faff880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 251 ms_handle_reset con 0x560f6ad9a400 session 0x560f69ee2e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 251 heartbeat osd_stat(store_statfs(0x4f9df2000/0x0/0x4ffc00000, data 0x1f2e98d/0x2098000, compress 0x0/0x0/0x0, omap 0x30ff3, meta 0x3d3f00d), peers [0,1] op hist [0,0,0,1,1])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 251 ms_handle_reset con 0x560f6d03f000 session 0x560f691528c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153911296 unmapped: 14598144 heap: 168509440 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 252 ms_handle_reset con 0x560f68ee4c00 session 0x560f6c7f0c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 252 ms_handle_reset con 0x560f68ee5000 session 0x560f6adbe380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 252 ms_handle_reset con 0x560f6b987c00 session 0x560f6c7f1340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 252 ms_handle_reset con 0x560f6da93800 session 0x560f6fe0d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.404502869s of 10.013297081s, submitted: 232
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 253 ms_handle_reset con 0x560f6d804400 session 0x560f6ba1cc40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153952256 unmapped: 14557184 heap: 168509440 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 253 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba1dc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2043479 data_alloc: 251658240 data_used: 27920633
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153952256 unmapped: 14557184 heap: 168509440 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153952256 unmapped: 14557184 heap: 168509440 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 253 ms_handle_reset con 0x560f6ad9a400 session 0x560f6ef321c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 253 ms_handle_reset con 0x560f68ee5000 session 0x560f6ba1c1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 253 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba1ce00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147955712 unmapped: 20553728 heap: 168509440 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 254 ms_handle_reset con 0x560f6da93800 session 0x560f69b9b500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147955712 unmapped: 20553728 heap: 168509440 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 255 ms_handle_reset con 0x560f6d804400 session 0x560f6fafefc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f9195000/0x0/0x4ffc00000, data 0x2b83ec8/0x2cf5000, compress 0x0/0x0/0x0, omap 0x32920, meta 0x3d3d6e0), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 255 ms_handle_reset con 0x560f6d03f000 session 0x560f685eea80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 255 ms_handle_reset con 0x560f6ad9a400 session 0x560f69373180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147955712 unmapped: 20553728 heap: 168509440 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2041571 data_alloc: 251658240 data_used: 27920829
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147955712 unmapped: 20553728 heap: 168509440 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 256 ms_handle_reset con 0x560f68ee4c00 session 0x560f6b485340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 256 ms_handle_reset con 0x560f6d804400 session 0x560f69ee2380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 256 ms_handle_reset con 0x560f6d03f000 session 0x560f69ee3dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 256 ms_handle_reset con 0x560f6da93800 session 0x560f69ee2a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 256 ms_handle_reset con 0x560f6ad9ac00 session 0x560f6fafe000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 256 ms_handle_reset con 0x560f6da93400 session 0x560f6bfe2c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f918f000/0x0/0x4ffc00000, data 0x2b85faa/0x2cf9000, compress 0x0/0x0/0x0, omap 0x32ae6, meta 0x3d3d51a), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 256 ms_handle_reset con 0x560f6d03f000 session 0x560f685ef6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 256 ms_handle_reset con 0x560f68ee4c00 session 0x560f6be00380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 256 ms_handle_reset con 0x560f6d804400 session 0x560f6ba4c8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 256 ms_handle_reset con 0x560f6da93800 session 0x560f6be00e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 256 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fafe1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147603456 unmapped: 20905984 heap: 168509440 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 257 ms_handle_reset con 0x560f6d03f000 session 0x560f69b9b340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 257 heartbeat osd_stat(store_statfs(0x4f918e000/0x0/0x4ffc00000, data 0x2b87a99/0x2cfc000, compress 0x0/0x0/0x0, omap 0x33452, meta 0x3d3cbae), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 257 ms_handle_reset con 0x560f6da93400 session 0x560f6be01dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147603456 unmapped: 20905984 heap: 168509440 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 258 ms_handle_reset con 0x560f6d804400 session 0x560f6fafe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 258 ms_handle_reset con 0x560f6da93800 session 0x560f6fafec40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 258 ms_handle_reset con 0x560f6ad9ac00 session 0x560f6c7f0700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147652608 unmapped: 20856832 heap: 168509440 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.929156303s of 10.082487106s, submitted: 75
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 20807680 heap: 168509440 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 258 ms_handle_reset con 0x560f68ee4c00 session 0x560f685ef880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 258 ms_handle_reset con 0x560f6d804400 session 0x560f6fb23c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 258 ms_handle_reset con 0x560f6d03f000 session 0x560f692b4000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 258 ms_handle_reset con 0x560f6da93800 session 0x560f6c7f1340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2143582 data_alloc: 251658240 data_used: 28446303
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147914752 unmapped: 28991488 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f82b2000/0x0/0x4ffc00000, data 0x3a606c3/0x3bd8000, compress 0x0/0x0/0x0, omap 0x3398f, meta 0x3d3c671), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 258 ms_handle_reset con 0x560f68ee4c00 session 0x560f69ee2380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 259 ms_handle_reset con 0x560f6d804400 session 0x560f69ee3dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 259 ms_handle_reset con 0x560f6ad9ac00 session 0x560f685efa40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 259 heartbeat osd_stat(store_statfs(0x4f82b2000/0x0/0x4ffc00000, data 0x3a606c3/0x3bd8000, compress 0x0/0x0/0x0, omap 0x3398f, meta 0x3d3c671), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147955712 unmapped: 28950528 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 260 ms_handle_reset con 0x560f6d03f000 session 0x560f69c02fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 260 ms_handle_reset con 0x560f6ad9a800 session 0x560f69f24a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 260 ms_handle_reset con 0x560f6da93400 session 0x560f6bfe3dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 260 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fe0da40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 151347200 unmapped: 25559040 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 260 ms_handle_reset con 0x560f6ad9ac00 session 0x560f6b896000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 260 ms_handle_reset con 0x560f6d03f000 session 0x560f6fe0d180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153059328 unmapped: 23846912 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 260 ms_handle_reset con 0x560f6d804400 session 0x560f6c7f16c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153108480 unmapped: 23797760 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 261 ms_handle_reset con 0x560f6d03f000 session 0x560f6eb3a8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2184308 data_alloc: 251658240 data_used: 34054239
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153108480 unmapped: 23797760 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 261 heartbeat osd_stat(store_statfs(0x4f82a9000/0x0/0x4ffc00000, data 0x3a65997/0x3be1000, compress 0x0/0x0/0x0, omap 0x34701, meta 0x3d3b8ff), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 262 ms_handle_reset con 0x560f6ad9ac00 session 0x560f68edf6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 262 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba1d6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 262 heartbeat osd_stat(store_statfs(0x4f82a4000/0x0/0x4ffc00000, data 0x3a67533/0x3be4000, compress 0x0/0x0/0x0, omap 0x34a41, meta 0x3d3b5bf), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153165824 unmapped: 23740416 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 262 heartbeat osd_stat(store_statfs(0x4f82a4000/0x0/0x4ffc00000, data 0x3a67533/0x3be4000, compress 0x0/0x0/0x0, omap 0x34a41, meta 0x3d3b5bf), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 263 ms_handle_reset con 0x560f6ad9b800 session 0x560f6bfe3a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 263 ms_handle_reset con 0x560f6da93400 session 0x560f69373dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153190400 unmapped: 23715840 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 264 ms_handle_reset con 0x560f68ee4c00 session 0x560f6faff500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153190400 unmapped: 23715840 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 264 ms_handle_reset con 0x560f6ad9ac00 session 0x560f69ee2a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 264 ms_handle_reset con 0x560f6ad9b800 session 0x560f6be016c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153190400 unmapped: 23715840 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.143525124s of 10.567707062s, submitted: 92
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 265 ms_handle_reset con 0x560f6d03f000 session 0x560f6ba5ba40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 265 ms_handle_reset con 0x560f6ad9a000 session 0x560f6ba1c540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2195332 data_alloc: 251658240 data_used: 34054239
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 23584768 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 265 heartbeat osd_stat(store_statfs(0x4f829d000/0x0/0x4ffc00000, data 0x3a6c8af/0x3bed000, compress 0x0/0x0/0x0, omap 0x35370, meta 0x3d3ac90), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 23584768 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 266 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fe0ce00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 266 ms_handle_reset con 0x560f6ad9ac00 session 0x560f6c7f0c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 23584768 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 267 ms_handle_reset con 0x560f6ad9b800 session 0x560f6adbe380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 267 ms_handle_reset con 0x560f6d03f000 session 0x560f69372380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156065792 unmapped: 20840448 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 268 heartbeat osd_stat(store_statfs(0x4f8295000/0x0/0x4ffc00000, data 0x3a7469b/0x3bf5000, compress 0x0/0x0/0x0, omap 0x35c4b, meta 0x3d3a3b5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156123136 unmapped: 20783104 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2218823 data_alloc: 251658240 data_used: 37486772
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156188672 unmapped: 20717568 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 269 handle_osd_map epochs [269,270], i have 270, src has [1,270]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156262400 unmapped: 20643840 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156270592 unmapped: 20635648 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 272 ms_handle_reset con 0x560f6a0e4800 session 0x560f6fdb6000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 20602880 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 272 ms_handle_reset con 0x560f68ee4c00 session 0x560f6b485880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 20602880 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.949384689s of 10.192997932s, submitted: 131
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2231536 data_alloc: 251658240 data_used: 37487559
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 273 heartbeat osd_stat(store_statfs(0x4f8289000/0x0/0x4ffc00000, data 0x3a7d114/0x3c03000, compress 0x0/0x0/0x0, omap 0x36ac7, meta 0x3d39539), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 273 handle_osd_map epochs [273,274], i have 273, src has [1,274]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 274 ms_handle_reset con 0x560f6ad9ac00 session 0x560f6fe0cc40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 157401088 unmapped: 19505152 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 274 ms_handle_reset con 0x560f6ad9b800 session 0x560f6fb22c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 274 heartbeat osd_stat(store_statfs(0x4f8280000/0x0/0x4ffc00000, data 0x3a807af/0x3c08000, compress 0x0/0x0/0x0, omap 0x37177, meta 0x3d38e89), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 157425664 unmapped: 19480576 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 157425664 unmapped: 19480576 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 275 ms_handle_reset con 0x560f6d03f000 session 0x560f6be01180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 275 ms_handle_reset con 0x560f6a0e5400 session 0x560f6bfe3c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 157958144 unmapped: 18948096 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 157966336 unmapped: 18939904 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2235345 data_alloc: 251658240 data_used: 37488058
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 157990912 unmapped: 18915328 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 157990912 unmapped: 18915328 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 276 heartbeat osd_stat(store_statfs(0x4f827d000/0x0/0x4ffc00000, data 0x3a83e28/0x3c0d000, compress 0x0/0x0/0x0, omap 0x3776b, meta 0x3d38895), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 276 heartbeat osd_stat(store_statfs(0x4f827d000/0x0/0x4ffc00000, data 0x3a83e28/0x3c0d000, compress 0x0/0x0/0x0, omap 0x3776b, meta 0x3d38895), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 276 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba4c1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 158236672 unmapped: 18669568 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 276 ms_handle_reset con 0x560f6ad9ac00 session 0x560f6bfe2a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 276 ms_handle_reset con 0x560f6ad9b800 session 0x560f6fafe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 158351360 unmapped: 18554880 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 158351360 unmapped: 18554880 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 277 ms_handle_reset con 0x560f6d03f000 session 0x560f6b8968c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 277 heartbeat osd_stat(store_statfs(0x4f79e7000/0x0/0x4ffc00000, data 0x4314969/0x44a0000, compress 0x0/0x0/0x0, omap 0x37b68, meta 0x3d38498), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.781032562s of 10.025880814s, submitted: 125
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 277 ms_handle_reset con 0x560f6a0e4000 session 0x560f6c7f1a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2249058 data_alloc: 251658240 data_used: 37488058
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 158400512 unmapped: 18505728 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 278 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fe0ddc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 278 ms_handle_reset con 0x560f6ad9b800 session 0x560f6ef33180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 279 ms_handle_reset con 0x560f6ad9ac00 session 0x560f6be00540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 158408704 unmapped: 18497536 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 280 ms_handle_reset con 0x560f6d03f000 session 0x560f6ef33500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 158408704 unmapped: 18497536 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 280 ms_handle_reset con 0x560f6b4d2c00 session 0x560f6ef32fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 280 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ef328c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 280 ms_handle_reset con 0x560f6ad9ac00 session 0x560f68edfc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 158416896 unmapped: 18489344 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 280 heartbeat osd_stat(store_statfs(0x4f8265000/0x0/0x4ffc00000, data 0x3a8fce1/0x3c23000, compress 0x0/0x0/0x0, omap 0x387ee, meta 0x3d37812), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 280 ms_handle_reset con 0x560f6ad9b800 session 0x560f6952aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 280 ms_handle_reset con 0x560f6b4d2c00 session 0x560f69373c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 158441472 unmapped: 18464768 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 281 ms_handle_reset con 0x560f6d03f000 session 0x560f6be01dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 281 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fafec40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2266070 data_alloc: 251658240 data_used: 37488931
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 281 ms_handle_reset con 0x560f6ad9b800 session 0x560f692b4000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 281 handle_osd_map epochs [281,282], i have 282, src has [1,282]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 158466048 unmapped: 18440192 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 ms_handle_reset con 0x560f6ad9ac00 session 0x560f6cfc6fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 ms_handle_reset con 0x560f6b4d2c00 session 0x560f6fb22a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 158466048 unmapped: 18440192 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 ms_handle_reset con 0x560f6b4d3400 session 0x560f6fe0ce00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 ms_handle_reset con 0x560f6b4cf000 session 0x560f69ee3dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fafee00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 158474240 unmapped: 18432000 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 ms_handle_reset con 0x560f6b4cec00 session 0x560f691528c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 heartbeat osd_stat(store_statfs(0x4f8266000/0x0/0x4ffc00000, data 0x3a93409/0x3c26000, compress 0x0/0x0/0x0, omap 0x39077, meta 0x3d36f89), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 ms_handle_reset con 0x560f6ad9b800 session 0x560f6fe0c540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 158466048 unmapped: 18440192 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 158466048 unmapped: 18440192 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2267931 data_alloc: 251658240 data_used: 38361395
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161103872 unmapped: 15802368 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 ms_handle_reset con 0x560f6b4d3800 session 0x560f69b5afc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.724837303s of 10.970345497s, submitted: 97
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 ms_handle_reset con 0x560f68ee4c00 session 0x560f69372e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 ms_handle_reset con 0x560f6ad9ac00 session 0x560f6ba1d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161234944 unmapped: 15671296 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 heartbeat osd_stat(store_statfs(0x4f8268000/0x0/0x4ffc00000, data 0x3a93397/0x3c24000, compress 0x0/0x0/0x0, omap 0x39266, meta 0x3d36d9a), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 ms_handle_reset con 0x560f6ad9b800 session 0x560f6b897a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161234944 unmapped: 15671296 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 283 ms_handle_reset con 0x560f6ad9b000 session 0x560f6ba1d500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 283 ms_handle_reset con 0x560f6ad9b400 session 0x560f6b485a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 283 ms_handle_reset con 0x560f6b4cf000 session 0x560f6f0221c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161349632 unmapped: 15556608 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 283 ms_handle_reset con 0x560f6b4cec00 session 0x560f6ef33180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 283 heartbeat osd_stat(store_statfs(0x4f826b000/0x0/0x4ffc00000, data 0x3a8b015/0x3c1f000, compress 0x0/0x0/0x0, omap 0x3974e, meta 0x3d368b2), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161349632 unmapped: 15556608 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2285547 data_alloc: 251658240 data_used: 40459059
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162406400 unmapped: 14499840 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 284 ms_handle_reset con 0x560f68ee5400 session 0x560f6ef33500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 284 ms_handle_reset con 0x560f68ee4c00 session 0x560f6952aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 284 ms_handle_reset con 0x560f6ad9b800 session 0x560f6be01dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162406400 unmapped: 14499840 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 284 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fafec40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 284 ms_handle_reset con 0x560f68ee5400 session 0x560f6fb22a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 284 ms_handle_reset con 0x560f6b4cec00 session 0x560f691528c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 284 handle_osd_map epochs [284,285], i have 285, src has [1,285]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162430976 unmapped: 14475264 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 285 ms_handle_reset con 0x560f6b4cf000 session 0x560f69b5afc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 285 ms_handle_reset con 0x560f6ad9b000 session 0x560f6fb22fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 285 ms_handle_reset con 0x560f68ee5400 session 0x560f6ba5a540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 285 ms_handle_reset con 0x560f6b4cec00 session 0x560f6fb23880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 157990912 unmapped: 18915328 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 286 ms_handle_reset con 0x560f68ee4c00 session 0x560f6be00fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 287 ms_handle_reset con 0x560f6b4cf000 session 0x560f6fe0c1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 157433856 unmapped: 19472384 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 287 ms_handle_reset con 0x560f6ad9ac00 session 0x560f6b485dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 287 heartbeat osd_stat(store_statfs(0x4f8ead000/0x0/0x4ffc00000, data 0x2e46d45/0x2fdd000, compress 0x0/0x0/0x0, omap 0x3a71c, meta 0x3d358e4), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 287 ms_handle_reset con 0x560f68ee4c00 session 0x560f69b5ae00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2177077 data_alloc: 251658240 data_used: 30550923
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 287 ms_handle_reset con 0x560f6b1f8000 session 0x560f6ba5b880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 287 ms_handle_reset con 0x560f6b7fec00 session 0x560f6adbfdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 287 ms_handle_reset con 0x560f68ee5400 session 0x560f6fb22c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 157483008 unmapped: 19423232 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 287 ms_handle_reset con 0x560f6ad9ac00 session 0x560f69372e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 287 ms_handle_reset con 0x560f68ee4c00 session 0x560f69ee2380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.703970909s of 10.025319099s, submitted: 243
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 287 ms_handle_reset con 0x560f68ee5400 session 0x560f6fb23180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140673024 unmapped: 36233216 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140673024 unmapped: 36233216 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 287 ms_handle_reset con 0x560f6b1f8000 session 0x560f69373c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140681216 unmapped: 36225024 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 287 ms_handle_reset con 0x560f6b7fec00 session 0x560f6fe0c540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 288 ms_handle_reset con 0x560f6b4cec00 session 0x560f6952a1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 138903552 unmapped: 38002688 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 288 heartbeat osd_stat(store_statfs(0x4fa448000/0x0/0x4ffc00000, data 0x18af4c6/0x1a42000, compress 0x0/0x0/0x0, omap 0x3ae8c, meta 0x3d35174), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 288 ms_handle_reset con 0x560f68ee4c00 session 0x560f6eb3a8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1970335 data_alloc: 218103808 data_used: 7924606
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 138903552 unmapped: 38002688 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 138903552 unmapped: 38002688 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 289 ms_handle_reset con 0x560f68ee5400 session 0x560f6ef33dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 138903552 unmapped: 38002688 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 289 ms_handle_reset con 0x560f6b1f8000 session 0x560f6b897180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 138928128 unmapped: 37978112 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 290 ms_handle_reset con 0x560f6b7fec00 session 0x560f6952a700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 39477248 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 291 ms_handle_reset con 0x560f6b4d2c00 session 0x560f68edf6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 291 ms_handle_reset con 0x560f6b4cf000 session 0x560f6fdb6000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1983324 data_alloc: 218103808 data_used: 7924704
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 291 heartbeat osd_stat(store_statfs(0x4fa440000/0x0/0x4ffc00000, data 0x18b2cc2/0x1a4a000, compress 0x0/0x0/0x0, omap 0x3b6a0, meta 0x3d34960), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 39477248 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.814341545s of 10.235242844s, submitted: 83
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 291 ms_handle_reset con 0x560f68ee4c00 session 0x560f69373dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 136232960 unmapped: 40673280 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 291 ms_handle_reset con 0x560f68ee5400 session 0x560f6be00380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135217152 unmapped: 41689088 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 292 ms_handle_reset con 0x560f6b1f8000 session 0x560f6fdb76c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 292 ms_handle_reset con 0x560f6b7fec00 session 0x560f69372e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135233536 unmapped: 41672704 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 292 ms_handle_reset con 0x560f68ee4c00 session 0x560f6adbfdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 292 heartbeat osd_stat(store_statfs(0x4fa438000/0x0/0x4ffc00000, data 0x18b64b0/0x1a51000, compress 0x0/0x0/0x0, omap 0x3c1a6, meta 0x3d33e5a), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 41664512 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 293 ms_handle_reset con 0x560f68ee5400 session 0x560f6bfe36c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 293 ms_handle_reset con 0x560f6b1f8000 session 0x560f6faff500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 293 heartbeat osd_stat(store_statfs(0x4fa43b000/0x0/0x4ffc00000, data 0x18b64b0/0x1a51000, compress 0x0/0x0/0x0, omap 0x3c2c3, meta 0x3d33d3d), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1991335 data_alloc: 218103808 data_used: 7925972
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 293 ms_handle_reset con 0x560f6b4cf000 session 0x560f69b5a000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135290880 unmapped: 41615360 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 293 ms_handle_reset con 0x560f6b4d3c00 session 0x560f6fe0ca80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135290880 unmapped: 41615360 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 293 ms_handle_reset con 0x560f68ee4c00 session 0x560f6c7f16c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 294 ms_handle_reset con 0x560f6b7fec00 session 0x560f6fdb6380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 294 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0x18b8152/0x1a55000, compress 0x0/0x0/0x0, omap 0x3c9f1, meta 0x3d4360f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135290880 unmapped: 41615360 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 294 heartbeat osd_stat(store_statfs(0x4f9293000/0x0/0x4ffc00000, data 0x18b9cf0/0x1a57000, compress 0x0/0x0/0x0, omap 0x3ce15, meta 0x4ed31eb), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 294 ms_handle_reset con 0x560f6b1f8000 session 0x560f6ba5bdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 295 ms_handle_reset con 0x560f68ee5400 session 0x560f6faff180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135356416 unmapped: 41549824 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 295 ms_handle_reset con 0x560f6b4cf000 session 0x560f69b9b6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135356416 unmapped: 41549824 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 296 ms_handle_reset con 0x560f68ee4c00 session 0x560f69b9a000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1996231 data_alloc: 218103808 data_used: 7926389
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135364608 unmapped: 41541632 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 296 ms_handle_reset con 0x560f68ee5400 session 0x560f6adbe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.932826519s of 10.034054756s, submitted: 138
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 297 ms_handle_reset con 0x560f6b1f8000 session 0x560f6fe0d6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 41533440 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 41533440 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 297 ms_handle_reset con 0x560f6b7fec00 session 0x560f6ef33500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 41533440 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 297 heartbeat osd_stat(store_statfs(0x4f928f000/0x0/0x4ffc00000, data 0x18bee35/0x1a5d000, compress 0x0/0x0/0x0, omap 0x3d6b3, meta 0x4ed294d), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135421952 unmapped: 41484288 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2010635 data_alloc: 218103808 data_used: 7926661
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 299 ms_handle_reset con 0x560f6b230800 session 0x560f6c7f1c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135438336 unmapped: 41467904 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 299 heartbeat osd_stat(store_statfs(0x4f9283000/0x0/0x4ffc00000, data 0x18c28cb/0x1a65000, compress 0x0/0x0/0x0, omap 0x3e079, meta 0x4ed1f87), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135438336 unmapped: 41467904 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 299 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fe0c000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135438336 unmapped: 41467904 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 299 ms_handle_reset con 0x560f68ee5400 session 0x560f6f022fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 299 ms_handle_reset con 0x560f6b7fec00 session 0x560f69b9bdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 299 ms_handle_reset con 0x560f6b1f8000 session 0x560f6faffdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 299 ms_handle_reset con 0x560f6b231c00 session 0x560f6be00fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 299 heartbeat osd_stat(store_statfs(0x4f9282000/0x0/0x4ffc00000, data 0x18c292d/0x1a66000, compress 0x0/0x0/0x0, omap 0x3e079, meta 0x4ed1f87), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 299 ms_handle_reset con 0x560f68ee4c00 session 0x560f69b5b180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 41451520 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 299 ms_handle_reset con 0x560f68ee5400 session 0x560f6fb22e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135462912 unmapped: 41443328 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 300 ms_handle_reset con 0x560f6b231c00 session 0x560f6ef33a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2018508 data_alloc: 218103808 data_used: 6878085
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 301 ms_handle_reset con 0x560f6b7fec00 session 0x560f6ef33180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 301 ms_handle_reset con 0x560f6b230c00 session 0x560f6fb23340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 301 ms_handle_reset con 0x560f6b1f8000 session 0x560f6fb22c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 43450368 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 43450368 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 301 ms_handle_reset con 0x560f6b230c00 session 0x560f69372380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.570668221s of 10.804024696s, submitted: 132
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 301 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fdb6fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 301 heartbeat osd_stat(store_statfs(0x4f9281000/0x0/0x4ffc00000, data 0x18c5d30/0x1a6b000, compress 0x0/0x0/0x0, omap 0x3ef5b, meta 0x4ed10a5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 43442176 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 301 ms_handle_reset con 0x560f68ee5400 session 0x560f69c02fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 43433984 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 302 ms_handle_reset con 0x560f6b231c00 session 0x560f6be001c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 302 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ef32e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 43450368 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 302 ms_handle_reset con 0x560f68ee5400 session 0x560f69c02fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 302 ms_handle_reset con 0x560f6b1f8000 session 0x560f69b9aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2032218 data_alloc: 218103808 data_used: 6882146
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 302 ms_handle_reset con 0x560f6b230c00 session 0x560f6fb228c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 302 ms_handle_reset con 0x560f6b7fec00 session 0x560f6fb22e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134037504 unmapped: 42868736 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134037504 unmapped: 42868736 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 302 ms_handle_reset con 0x560f68ee4c00 session 0x560f6c7f1c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 302 handle_osd_map epochs [302,303], i have 303, src has [1,303]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 303 ms_handle_reset con 0x560f6b1f8000 session 0x560f6ba5b880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 303 ms_handle_reset con 0x560f6b231000 session 0x560f6adbfdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 303 ms_handle_reset con 0x560f6b231800 session 0x560f69b5b6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 303 ms_handle_reset con 0x560f6d805400 session 0x560f69ee2e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 303 ms_handle_reset con 0x560f6d805400 session 0x560f685efc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134397952 unmapped: 42508288 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 303 ms_handle_reset con 0x560f6b21d000 session 0x560f6c7f1180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 304 ms_handle_reset con 0x560f6b230c00 session 0x560f6b484000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 14K writes, 57K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 4680 syncs, 3.10 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8815 writes, 33K keys, 8815 commit groups, 1.0 writes per commit group, ingest: 21.03 MB, 0.04 MB/s#012Interval WAL: 8815 writes, 3778 syncs, 2.33 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 304 ms_handle_reset con 0x560f68ee5400 session 0x560f6c7f16c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 304 heartbeat osd_stat(store_statfs(0x4f90d8000/0x0/0x4ffc00000, data 0x1a68530/0x1c12000, compress 0x0/0x0/0x0, omap 0x3f8ea, meta 0x4ed0716), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 304 ms_handle_reset con 0x560f6b21c400 session 0x560f6faffa40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 43237376 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 304 ms_handle_reset con 0x560f68ee5400 session 0x560f6adbf180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 43229184 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 305 ms_handle_reset con 0x560f6b21d000 session 0x560f6c7f1340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 305 ms_handle_reset con 0x560f6b230c00 session 0x560f6fb23340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 305 ms_handle_reset con 0x560f6d805400 session 0x560f6ef33a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2073750 data_alloc: 218103808 data_used: 6882759
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 305 ms_handle_reset con 0x560f69ae4000 session 0x560f69373500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133808128 unmapped: 43098112 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 305 ms_handle_reset con 0x560f69ae4c00 session 0x560f6be00fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 305 ms_handle_reset con 0x560f6b230c00 session 0x560f6be00e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 305 ms_handle_reset con 0x560f6b21d000 session 0x560f6f022fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 306 ms_handle_reset con 0x560f689fcc00 session 0x560f6bfe3500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 306 ms_handle_reset con 0x560f6d805400 session 0x560f6b897880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 306 ms_handle_reset con 0x560f6d805400 session 0x560f685ee540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 306 ms_handle_reset con 0x560f68ee5400 session 0x560f6fafefc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 132956160 unmapped: 43950080 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 306 ms_handle_reset con 0x560f69ae4c00 session 0x560f6fe0cfc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f8b42000/0x0/0x4ffc00000, data 0x1ff782b/0x21a8000, compress 0x0/0x0/0x0, omap 0x4049f, meta 0x4ecfb61), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.880592346s of 10.141967773s, submitted: 158
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 307 ms_handle_reset con 0x560f6b230c00 session 0x560f69b5a8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 307 ms_handle_reset con 0x560f6b21d000 session 0x560f6cfc76c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 43819008 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 308 ms_handle_reset con 0x560f6d03f800 session 0x560f69b9b340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 308 ms_handle_reset con 0x560f689fcc00 session 0x560f6952b180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 308 ms_handle_reset con 0x560f68ee5400 session 0x560f6be01dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133087232 unmapped: 43819008 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: mgrc ms_handle_reset ms_handle_reset con 0x560f69f2d800
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1000647904
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1000647904,v1:192.168.122.100:6801/1000647904]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: mgrc handle_mgr_configure stats_period=5
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 308 heartbeat osd_stat(store_statfs(0x4f8b38000/0x0/0x4ffc00000, data 0x1ffaf63/0x21ae000, compress 0x0/0x0/0x0, omap 0x40a19, meta 0x4ecf5e7), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 308 ms_handle_reset con 0x560f69ae4c00 session 0x560f6fafe000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133292032 unmapped: 43614208 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 308 ms_handle_reset con 0x560f6b21d000 session 0x560f6b485500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 308 ms_handle_reset con 0x560f68ee5400 session 0x560f6b485880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2101815 data_alloc: 218103808 data_used: 6883961
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 43597824 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 309 heartbeat osd_stat(store_statfs(0x4f8b3f000/0x0/0x4ffc00000, data 0x1ffaf53/0x21ad000, compress 0x0/0x0/0x0, omap 0x40c19, meta 0x4ecf3e7), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 309 ms_handle_reset con 0x560f69ae4c00 session 0x560f6ba4c1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 43597824 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 310 ms_handle_reset con 0x560f6d03f800 session 0x560f6fb23c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 310 ms_handle_reset con 0x560f6b230c00 session 0x560f6ba1d500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 310 ms_handle_reset con 0x560f689fcc00 session 0x560f68edf6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 43597824 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 43597824 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 311 ms_handle_reset con 0x560f68ee5400 session 0x560f6ba1d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 311 ms_handle_reset con 0x560f689fcc00 session 0x560f6c7f16c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 311 ms_handle_reset con 0x560f69ae4c00 session 0x560f6bfe28c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 43597824 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2115927 data_alloc: 218103808 data_used: 6885131
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 43597824 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 311 ms_handle_reset con 0x560f6d03f800 session 0x560f6ba5b880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 311 heartbeat osd_stat(store_statfs(0x4f8b2f000/0x0/0x4ffc00000, data 0x20002ed/0x21b8000, compress 0x0/0x0/0x0, omap 0x417a1, meta 0x4ece85f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 43597824 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 312 ms_handle_reset con 0x560f6d805400 session 0x560f6f023a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 312 ms_handle_reset con 0x560f689fcc00 session 0x560f6be00000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 133447680 unmapped: 43458560 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 312 ms_handle_reset con 0x560f692ef400 session 0x560f6fb23500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 312 heartbeat osd_stat(store_statfs(0x4f8b2f000/0x0/0x4ffc00000, data 0x2001ea5/0x21bb000, compress 0x0/0x0/0x0, omap 0x41905, meta 0x4ece6fb), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 312 handle_osd_map epochs [313,313], i have 313, src has [1,313]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.574655533s of 10.660223007s, submitted: 50
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 313 ms_handle_reset con 0x560f68ee5400 session 0x560f6fafe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 313 ms_handle_reset con 0x560f6b230c00 session 0x560f6ef33180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 313 ms_handle_reset con 0x560f69ae4c00 session 0x560f69372a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 313 ms_handle_reset con 0x560f689fcc00 session 0x560f6ef33340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134496256 unmapped: 42409984 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134496256 unmapped: 42409984 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 314 ms_handle_reset con 0x560f692ef400 session 0x560f6faff340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 314 ms_handle_reset con 0x560f68ee5400 session 0x560f6b896700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 314 ms_handle_reset con 0x560f6b230c00 session 0x560f6fdb6380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2124245 data_alloc: 218103808 data_used: 6885131
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134529024 unmapped: 42377216 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134529024 unmapped: 42377216 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f8b27000/0x0/0x4ffc00000, data 0x20057f6/0x21c1000, compress 0x0/0x0/0x0, omap 0x41e8c, meta 0x4ece174), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 314 ms_handle_reset con 0x560f6d03f800 session 0x560f69b5a700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134938624 unmapped: 41967616 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 314 ms_handle_reset con 0x560f68ee5400 session 0x560f6fe0ca80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134938624 unmapped: 41967616 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 316 ms_handle_reset con 0x560f692ef400 session 0x560f6adbe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 316 ms_handle_reset con 0x560f69125400 session 0x560f6952a380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 134938624 unmapped: 41967616 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 316 ms_handle_reset con 0x560f6b230c00 session 0x560f6fe0dc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 317 ms_handle_reset con 0x560f689fb400 session 0x560f6faffdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2163128 data_alloc: 234881024 data_used: 11080020
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 41811968 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 317 heartbeat osd_stat(store_statfs(0x4f8b1a000/0x0/0x4ffc00000, data 0x200ac1b/0x21cc000, compress 0x0/0x0/0x0, omap 0x42a79, meta 0x4ecd587), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 318 ms_handle_reset con 0x560f6df2dc00 session 0x560f69c021c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135110656 unmapped: 41795584 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 318 ms_handle_reset con 0x560f689fb400 session 0x560f6bfe2fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135110656 unmapped: 41795584 heap: 176906240 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 318 ms_handle_reset con 0x560f69125400 session 0x560f6c7f1180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 318 ms_handle_reset con 0x560f68ee5400 session 0x560f6bfe3500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.253678322s of 10.338562965s, submitted: 49
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 45973504 heap: 181108736 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 320 heartbeat osd_stat(store_statfs(0x4f7318000/0x0/0x4ffc00000, data 0x380e3fb/0x39d2000, compress 0x0/0x0/0x0, omap 0x42ffd, meta 0x4ecd003), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 139345920 unmapped: 41762816 heap: 181108736 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2641899 data_alloc: 234881024 data_used: 11080020
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 49020928 heap: 185311232 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 320 ms_handle_reset con 0x560f69b4ec00 session 0x560f6fe0dc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 144891904 unmapped: 40419328 heap: 185311232 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 320 heartbeat osd_stat(store_statfs(0x4f0716000/0x0/0x4ffc00000, data 0xa40ff30/0xa5d6000, compress 0x0/0x0/0x0, omap 0x43210, meta 0x4eccdf0), peers [0,1] op hist [0,0,0,0,0,0,1])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 144613376 unmapped: 40697856 heap: 185311232 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 320 ms_handle_reset con 0x560f689fb400 session 0x560f6adbe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 321 ms_handle_reset con 0x560f692ef400 session 0x560f6be00fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 321 ms_handle_reset con 0x560f6df2dc00 session 0x560f6fe0cfc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 321 ms_handle_reset con 0x560f6b230c00 session 0x560f6bfe2c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 49020928 heap: 189513728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 322 ms_handle_reset con 0x560f69125400 session 0x560f6b485340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 322 ms_handle_reset con 0x560f68ee5400 session 0x560f6faff340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140451840 unmapped: 49061888 heap: 189513728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717984 data_alloc: 234881024 data_used: 11080231
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 323 ms_handle_reset con 0x560f689fb400 session 0x560f6be00000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 47955968 heap: 189513728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 323 heartbeat osd_stat(store_statfs(0x4e6e8c000/0x0/0x4ffc00000, data 0x13c8e2ff/0x13e58000, compress 0x0/0x0/0x0, omap 0x4440c, meta 0x4ecbbf4), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 324 ms_handle_reset con 0x560f692ef400 session 0x560f69b9a540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 324 ms_handle_reset con 0x560f6b230c00 session 0x560f6c7f1c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 47857664 heap: 189513728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 47849472 heap: 189513728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.468177795s of 10.297376633s, submitted: 219
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 324 ms_handle_reset con 0x560f6df2dc00 session 0x560f69b9b500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 324 heartbeat osd_stat(store_statfs(0x4e6e90000/0x0/0x4ffc00000, data 0x13c8fea9/0x13e5a000, compress 0x0/0x0/0x0, omap 0x445f8, meta 0x4ecba08), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 47849472 heap: 189513728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 324 ms_handle_reset con 0x560f68ee5400 session 0x560f6b897180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 324 ms_handle_reset con 0x560f689fb400 session 0x560f6ba1dc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 324 ms_handle_reset con 0x560f6b230c00 session 0x560f6bfe3a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 324 handle_osd_map epochs [324,325], i have 324, src has [1,325]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 325 ms_handle_reset con 0x560f69388800 session 0x560f6fafee00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 325 ms_handle_reset con 0x560f6a0e0c00 session 0x560f6fb23500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 325 ms_handle_reset con 0x560f689fb400 session 0x560f6ba5b880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 325 ms_handle_reset con 0x560f692ef400 session 0x560f69b9aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141705216 unmapped: 47808512 heap: 189513728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 325 handle_osd_map epochs [325,326], i have 326, src has [1,326]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3723526 data_alloc: 234881024 data_used: 11080118
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 326 ms_handle_reset con 0x560f69388800 session 0x560f6fafe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 326 ms_handle_reset con 0x560f68ee5400 session 0x560f69373500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 47800320 heap: 189513728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 326 ms_handle_reset con 0x560f69ae4c00 session 0x560f6fe0c000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 326 ms_handle_reset con 0x560f689fcc00 session 0x560f6c7f0c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 326 ms_handle_reset con 0x560f69ae4c00 session 0x560f6fb22e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 47800320 heap: 189513728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 326 ms_handle_reset con 0x560f68ee5400 session 0x560f6be001c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 327 ms_handle_reset con 0x560f69388800 session 0x560f6e5ef340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 327 ms_handle_reset con 0x560f689fb400 session 0x560f6ba1d6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141770752 unmapped: 47742976 heap: 189513728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 327 handle_osd_map epochs [327,328], i have 327, src has [1,328]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 328 ms_handle_reset con 0x560f6b230c00 session 0x560f6ba1ca80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 328 ms_handle_reset con 0x560f692ef400 session 0x560f6ba1cc40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 328 ms_handle_reset con 0x560f689fcc00 session 0x560f69153180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141778944 unmapped: 47734784 heap: 189513728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 329 heartbeat osd_stat(store_statfs(0x4e6e7d000/0x0/0x4ffc00000, data 0x13c9ac08/0x13e69000, compress 0x0/0x0/0x0, omap 0x458c1, meta 0x4eca73f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141819904 unmapped: 47693824 heap: 189513728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 330 ms_handle_reset con 0x560f68ee5400 session 0x560f69b5b180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733003 data_alloc: 234881024 data_used: 11080633
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 330 ms_handle_reset con 0x560f69ae4c00 session 0x560f6f022380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 330 ms_handle_reset con 0x560f69388800 session 0x560f69372a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141844480 unmapped: 47669248 heap: 189513728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 331 ms_handle_reset con 0x560f6b230c00 session 0x560f69b5b180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 331 ms_handle_reset con 0x560f68ee5400 session 0x560f6fb22540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 142082048 unmapped: 72638464 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 331 ms_handle_reset con 0x560f692ef400 session 0x560f6c7f0fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 331 ms_handle_reset con 0x560f69388c00 session 0x560f6e5ef340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 331 ms_handle_reset con 0x560f689fb800 session 0x560f69b9b500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140427264 unmapped: 74293248 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.753842354s of 10.125333786s, submitted: 275
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 332 ms_handle_reset con 0x560f6b773400 session 0x560f69b9bdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149159936 unmapped: 65560576 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 333 ms_handle_reset con 0x560f68ee5400 session 0x560f6adbe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 334 heartbeat osd_stat(store_statfs(0x4df223000/0x0/0x4ffc00000, data 0x1b8fc0a1/0x1bac9000, compress 0x0/0x0/0x0, omap 0x463b4, meta 0x4ec9c4c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141262848 unmapped: 73457664 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5021177 data_alloc: 218103808 data_used: 6890595
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 145809408 unmapped: 68911104 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 334 ms_handle_reset con 0x560f689fcc00 session 0x560f6e5eee00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 334 ms_handle_reset con 0x560f69ae4c00 session 0x560f6be00540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 73302016 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 73302016 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 334 handle_osd_map epochs [334,335], i have 335, src has [1,335]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 335 ms_handle_reset con 0x560f689fb800 session 0x560f6bfe2e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141451264 unmapped: 73269248 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 336 heartbeat osd_stat(store_statfs(0x4d6219000/0x0/0x4ffc00000, data 0x249013f8/0x24ad1000, compress 0x0/0x0/0x0, omap 0x46dc3, meta 0x4ec923d), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 337 ms_handle_reset con 0x560f689fcc00 session 0x560f6ef33180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141484032 unmapped: 73236480 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5157853 data_alloc: 218103808 data_used: 6892190
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141508608 unmapped: 73211904 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 73039872 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 337 ms_handle_reset con 0x560f68ee5400 session 0x560f692b48c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 337 ms_handle_reset con 0x560f6b773400 session 0x560f6ad268c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 337 heartbeat osd_stat(store_statfs(0x4d6216000/0x0/0x4ffc00000, data 0x24904a85/0x24ad6000, compress 0x0/0x0/0x0, omap 0x478e8, meta 0x4ec8718), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 337 ms_handle_reset con 0x560f692ef400 session 0x560f6cfc6fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140648448 unmapped: 74072064 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 337 handle_osd_map epochs [337,338], i have 337, src has [1,338]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 338 ms_handle_reset con 0x560f689fb800 session 0x560f6c7f1c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.919774055s of 10.090165138s, submitted: 208
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140664832 unmapped: 74055680 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 339 ms_handle_reset con 0x560f689fcc00 session 0x560f6eb3ac40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 340 ms_handle_reset con 0x560f68ee5400 session 0x560f6fb22a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 74022912 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 340 ms_handle_reset con 0x560f692ef400 session 0x560f6bfe36c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 340 heartbeat osd_stat(store_statfs(0x4d6206000/0x0/0x4ffc00000, data 0x24909cf2/0x24ae0000, compress 0x0/0x0/0x0, omap 0x48307, meta 0x4ec7cf9), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5171606 data_alloc: 218103808 data_used: 6892190
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 74014720 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 340 ms_handle_reset con 0x560f6b773400 session 0x560f692b4380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 340 heartbeat osd_stat(store_statfs(0x4d6205000/0x0/0x4ffc00000, data 0x24909d54/0x24ae1000, compress 0x0/0x0/0x0, omap 0x48307, meta 0x4ec7cf9), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 142467072 unmapped: 72253440 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 340 ms_handle_reset con 0x560f689fb800 session 0x560f6b897dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 340 ms_handle_reset con 0x560f689fcc00 session 0x560f69b9a000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 340 heartbeat osd_stat(store_statfs(0x4d5e54000/0x0/0x4ffc00000, data 0x24cc1cf2/0x24e98000, compress 0x0/0x0/0x0, omap 0x4838b, meta 0x4ec7c75), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140861440 unmapped: 73859072 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 340 handle_osd_map epochs [340,341], i have 341, src has [1,341]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 341 ms_handle_reset con 0x560f692ef400 session 0x560f6cfc7340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 73842688 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 342 ms_handle_reset con 0x560f69388800 session 0x560f6be01500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 342 ms_handle_reset con 0x560f6b230c00 session 0x560f6f022380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 342 ms_handle_reset con 0x560f68ee5400 session 0x560f6ef32fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 342 heartbeat osd_stat(store_statfs(0x4d5e4f000/0x0/0x4ffc00000, data 0x24cc38aa/0x24e9b000, compress 0x0/0x0/0x0, omap 0x484fd, meta 0x4ec7b03), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140943360 unmapped: 73777152 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 342 handle_osd_map epochs [342,343], i have 342, src has [1,343]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5208074 data_alloc: 218103808 data_used: 6892288
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 343 ms_handle_reset con 0x560f689fb800 session 0x560f6fdb6540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 73170944 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 344 ms_handle_reset con 0x560f689fcc00 session 0x560f69372380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 344 ms_handle_reset con 0x560f692ef400 session 0x560f6a0c8fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 344 ms_handle_reset con 0x560f69388800 session 0x560f6bfe3dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140812288 unmapped: 73908224 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 344 ms_handle_reset con 0x560f689fb800 session 0x560f6ef321c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140812288 unmapped: 73908224 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 344 heartbeat osd_stat(store_statfs(0x4d56ba000/0x0/0x4ffc00000, data 0x25452dd1/0x25630000, compress 0x0/0x0/0x0, omap 0x4900a, meta 0x4ec6ff6), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140812288 unmapped: 73908224 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.390221596s of 10.737737656s, submitted: 103
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 344 ms_handle_reset con 0x560f68ee5400 session 0x560f6f0228c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 73891840 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 344 handle_osd_map epochs [344,345], i have 345, src has [1,345]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 345 ms_handle_reset con 0x560f692ef400 session 0x560f6b896e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5257176 data_alloc: 218103808 data_used: 6893973
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140861440 unmapped: 73859072 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 346 ms_handle_reset con 0x560f6df2c000 session 0x560f6ef33dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 346 ms_handle_reset con 0x560f6917f000 session 0x560f6fb23340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 346 ms_handle_reset con 0x560f689fcc00 session 0x560f6fe0ca80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140902400 unmapped: 73818112 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 347 ms_handle_reset con 0x560f689fb800 session 0x560f69372380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 347 ms_handle_reset con 0x560f68ee5400 session 0x560f6fdb6540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 73752576 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 347 heartbeat osd_stat(store_statfs(0x4d56af000/0x0/0x4ffc00000, data 0x25458698/0x2563b000, compress 0x0/0x0/0x0, omap 0x4a226, meta 0x4ec5dda), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 347 heartbeat osd_stat(store_statfs(0x4d56b0000/0x0/0x4ffc00000, data 0x25458636/0x2563a000, compress 0x0/0x0/0x0, omap 0x4a226, meta 0x4ec5dda), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 73752576 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 348 ms_handle_reset con 0x560f692ef400 session 0x560f6ba1d340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 348 ms_handle_reset con 0x560f6df2c000 session 0x560f6ba4c8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 73736192 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5267238 data_alloc: 218103808 data_used: 6893973
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 73736192 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 349 ms_handle_reset con 0x560f689fb800 session 0x560f6fafefc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 349 heartbeat osd_stat(store_statfs(0x4d56ac000/0x0/0x4ffc00000, data 0x25459d85/0x2563c000, compress 0x0/0x0/0x0, omap 0x4a456, meta 0x4ec5baa), peers [0,1] op hist [0,0,0,2,2])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 349 ms_handle_reset con 0x560f689fcc00 session 0x560f6fe0d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 142647296 unmapped: 72073216 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 349 ms_handle_reset con 0x560f692ef400 session 0x560f6b896c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 349 ms_handle_reset con 0x560f6df2c000 session 0x560f6ba1cfc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 349 ms_handle_reset con 0x560f69ae5400 session 0x560f6c7f1500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 349 ms_handle_reset con 0x560f689fb800 session 0x560f69372fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 350 ms_handle_reset con 0x560f68ee5400 session 0x560f6adbe380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 350 heartbeat osd_stat(store_statfs(0x4d497e000/0x0/0x4ffc00000, data 0x2618655b/0x2636a000, compress 0x0/0x0/0x0, omap 0x4b028, meta 0x4ec4fd8), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 72040448 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 72040448 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Feb  2 07:14:57 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19148 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 351 ms_handle_reset con 0x560f689fcc00 session 0x560f6fe0dc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.346989632s of 10.045645714s, submitted: 157
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 351 ms_handle_reset con 0x560f692ef400 session 0x560f6b8968c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 142696448 unmapped: 72024064 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5351295 data_alloc: 218103808 data_used: 6894614
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 142712832 unmapped: 72007680 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 150773760 unmapped: 63946752 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 351 heartbeat osd_stat(store_statfs(0x4d497d000/0x0/0x4ffc00000, data 0x261881af/0x2636d000, compress 0x0/0x0/0x0, omap 0x4b1ce, meta 0x4ec4e32), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 150773760 unmapped: 63946752 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 150773760 unmapped: 63946752 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 351 handle_osd_map epochs [351,352], i have 352, src has [1,352]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 150790144 unmapped: 63930368 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5435349 data_alloc: 234881024 data_used: 19253883
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 150790144 unmapped: 63930368 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 150790144 unmapped: 63930368 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 352 heartbeat osd_stat(store_statfs(0x4d497a000/0x0/0x4ffc00000, data 0x26189c6a/0x26370000, compress 0x0/0x0/0x0, omap 0x4b662, meta 0x4ec499e), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 150822912 unmapped: 63897600 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 150822912 unmapped: 63897600 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 352 handle_osd_map epochs [352,353], i have 352, src has [1,353]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 heartbeat osd_stat(store_statfs(0x4d497a000/0x0/0x4ffc00000, data 0x26189c6a/0x26370000, compress 0x0/0x0/0x0, omap 0x4b662, meta 0x4ec499e), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 150831104 unmapped: 63889408 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5438123 data_alloc: 234881024 data_used: 19253883
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 150831104 unmapped: 63889408 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.320289612s of 12.353035927s, submitted: 29
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153280512 unmapped: 61440000 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 ms_handle_reset con 0x560f6917e400 session 0x560f6cfc7880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153722880 unmapped: 60997632 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 ms_handle_reset con 0x560f689fb800 session 0x560f69ee2e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153837568 unmapped: 60882944 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 ms_handle_reset con 0x560f689fcc00 session 0x560f6ba1d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 ms_handle_reset con 0x560f68ee5400 session 0x560f6ba1d500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 heartbeat osd_stat(store_statfs(0x4d42d2000/0x0/0x4ffc00000, data 0x26bd16f9/0x26a1a000, compress 0x0/0x0/0x0, omap 0x4bbb4, meta 0x4ec444c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153550848 unmapped: 61169664 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5529733 data_alloc: 234881024 data_used: 20286075
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153550848 unmapped: 61169664 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153550848 unmapped: 61169664 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153550848 unmapped: 61169664 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 heartbeat osd_stat(store_statfs(0x4d42d2000/0x0/0x4ffc00000, data 0x26bd16f9/0x26a1a000, compress 0x0/0x0/0x0, omap 0x4bbb4, meta 0x4ec444c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153567232 unmapped: 61153280 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153567232 unmapped: 61153280 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 ms_handle_reset con 0x560f6917e400 session 0x560f6be01dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 ms_handle_reset con 0x560f692ef400 session 0x560f68ede540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 ms_handle_reset con 0x560f6df2c000 session 0x560f6fdb6c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 ms_handle_reset con 0x560f6df2c400 session 0x560f6b896540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 ms_handle_reset con 0x560f6b206c00 session 0x560f6f022700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5526279 data_alloc: 234881024 data_used: 20290171
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 146325504 unmapped: 68395008 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 ms_handle_reset con 0x560f689fb800 session 0x560f6cfc76c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 146325504 unmapped: 68395008 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 146325504 unmapped: 68395008 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 heartbeat osd_stat(store_statfs(0x4d5020000/0x0/0x4ffc00000, data 0x25e846e9/0x25ccc000, compress 0x0/0x0/0x0, omap 0x4be79, meta 0x4ec4187), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 ms_handle_reset con 0x560f689fcc00 session 0x560f6eb3afc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 145997824 unmapped: 68722688 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 ms_handle_reset con 0x560f689fb800 session 0x560f685eea80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.517272949s of 12.711024284s, submitted: 71
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 354 heartbeat osd_stat(store_statfs(0x4d5020000/0x0/0x4ffc00000, data 0x25e846e9/0x25ccc000, compress 0x0/0x0/0x0, omap 0x4be79, meta 0x4ec4187), peers [0,1] op hist [1])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 354 ms_handle_reset con 0x560f6df2c000 session 0x560f6adbfc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 148029440 unmapped: 66691072 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 355 ms_handle_reset con 0x560f6b206c00 session 0x560f6ad27500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 355 heartbeat osd_stat(store_statfs(0x4d4d17000/0x0/0x4ffc00000, data 0x26914285/0x25fd3000, compress 0x0/0x0/0x0, omap 0x4c5b8, meta 0x4ec3a48), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5467757 data_alloc: 218103808 data_used: 7931531
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 146907136 unmapped: 67813376 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 146915328 unmapped: 67805184 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 355 ms_handle_reset con 0x560f68ee5400 session 0x560f69b5a1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 355 handle_osd_map epochs [355,356], i have 355, src has [1,356]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 356 ms_handle_reset con 0x560f6df2c400 session 0x560f6faffdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 356 ms_handle_reset con 0x560f6917e400 session 0x560f69b5a540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147210240 unmapped: 67510272 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 356 ms_handle_reset con 0x560f689fb800 session 0x560f6adbfdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147210240 unmapped: 67510272 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 357 ms_handle_reset con 0x560f68ee5400 session 0x560f6ba4ddc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147251200 unmapped: 67469312 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 357 heartbeat osd_stat(store_statfs(0x4d4d14000/0x0/0x4ffc00000, data 0x269179af/0x25fd8000, compress 0x0/0x0/0x0, omap 0x4d057, meta 0x4ec2fa9), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5353149 data_alloc: 218103808 data_used: 7931515
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 357 ms_handle_reset con 0x560f6b206c00 session 0x560f6eb3bc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147251200 unmapped: 67469312 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 357 ms_handle_reset con 0x560f6df2c000 session 0x560f6f023c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 357 ms_handle_reset con 0x560f689fb800 session 0x560f6f022000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 357 ms_handle_reset con 0x560f68ee5400 session 0x560f69c02c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147636224 unmapped: 67084288 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 358 ms_handle_reset con 0x560f6917e400 session 0x560f69ee2540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 358 ms_handle_reset con 0x560f6b206c00 session 0x560f6eb3b180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147644416 unmapped: 67076096 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 358 ms_handle_reset con 0x560f6917ec00 session 0x560f6f022c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 358 ms_handle_reset con 0x560f692ef400 session 0x560f6ba4d6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 147644416 unmapped: 67076096 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.710219383s of 10.295080185s, submitted: 175
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 62832640 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 359 heartbeat osd_stat(store_statfs(0x4d3384000/0x0/0x4ffc00000, data 0x27b15cd6/0x27966000, compress 0x0/0x0/0x0, omap 0x4deea, meta 0x4ec2116), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5676915 data_alloc: 218103808 data_used: 7932242
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 148054016 unmapped: 66666496 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 152805376 unmapped: 61915136 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 359 heartbeat osd_stat(store_statfs(0x4ca786000/0x0/0x4ffc00000, data 0x30715cd6/0x30566000, compress 0x0/0x0/0x0, omap 0x4deea, meta 0x4ec2116), peers [0,1] op hist [0,0,0,0,0,0,1,1])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153026560 unmapped: 61693952 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 359 ms_handle_reset con 0x560f6917e400 session 0x560f6ba1ddc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 153436160 unmapped: 61284352 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 159244288 unmapped: 55476224 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 360 ms_handle_reset con 0x560f68ee5400 session 0x560f6ba1da40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 360 ms_handle_reset con 0x560f6917ec00 session 0x560f6ba4c700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 360 ms_handle_reset con 0x560f689fb800 session 0x560f6ba1cc40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 360 ms_handle_reset con 0x560f68ee5400 session 0x560f6e5ef180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7097045 data_alloc: 218103808 data_used: 7932827
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 360 ms_handle_reset con 0x560f6917e400 session 0x560f6ad27a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149856256 unmapped: 64864256 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149913600 unmapped: 64806912 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 360 ms_handle_reset con 0x560f692ef400 session 0x560f6a0c8c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 360 ms_handle_reset con 0x560f6917ec00 session 0x560f685ee540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 360 ms_handle_reset con 0x560f6b206c00 session 0x560f6ba5ae00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 148520960 unmapped: 66199552 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 360 heartbeat osd_stat(store_statfs(0x4c5f83000/0x0/0x4ffc00000, data 0x33f17c2e/0x33d69000, compress 0x0/0x0/0x0, omap 0x4e741, meta 0x4ec18bf), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 360 ms_handle_reset con 0x560f68ee5400 session 0x560f6ba4ce00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 360 ms_handle_reset con 0x560f6917e400 session 0x560f6eb3afc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149577728 unmapped: 65142784 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 361 ms_handle_reset con 0x560f6917ec00 session 0x560f68ede540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 361 ms_handle_reset con 0x560f692ef400 session 0x560f6ba1d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.248474121s of 10.008143425s, submitted: 189
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 361 ms_handle_reset con 0x560f6a0e4800 session 0x560f6b896c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149585920 unmapped: 65134592 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5539349 data_alloc: 218103808 data_used: 7995177
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149954560 unmapped: 64765952 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 361 ms_handle_reset con 0x560f6917e400 session 0x560f6cfc7180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 361 ms_handle_reset con 0x560f6917ec00 session 0x560f6a0c9a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156147712 unmapped: 58572800 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 362 ms_handle_reset con 0x560f692ef400 session 0x560f69b9b500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 362 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fb221c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 362 ms_handle_reset con 0x560f68ee5400 session 0x560f6fafefc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 362 heartbeat osd_stat(store_statfs(0x4d4b81000/0x0/0x4ffc00000, data 0x263192f6/0x2616b000, compress 0x0/0x0/0x0, omap 0x4e9ea, meta 0x4ec1616), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156147712 unmapped: 58572800 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 362 ms_handle_reset con 0x560f68ee4c00 session 0x560f6e5ee8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 362 handle_osd_map epochs [362,363], i have 362, src has [1,363]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 363 ms_handle_reset con 0x560f6917e400 session 0x560f6adbe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156172288 unmapped: 58548224 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 363 ms_handle_reset con 0x560f6917ec00 session 0x560f6e5ef880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 363 ms_handle_reset con 0x560f692ef400 session 0x560f6cfc6380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 363 ms_handle_reset con 0x560f6b4d2c00 session 0x560f69ee2700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156205056 unmapped: 58515456 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5513933 data_alloc: 234881024 data_used: 18323206
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156205056 unmapped: 58515456 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 364 ms_handle_reset con 0x560f6917e400 session 0x560f6fdb7880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 365 ms_handle_reset con 0x560f6917ec00 session 0x560f6ad26fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 365 ms_handle_reset con 0x560f68ee4c00 session 0x560f6faff340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 365 ms_handle_reset con 0x560f692ef400 session 0x560f6cfc7340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 157384704 unmapped: 57335808 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 366 heartbeat osd_stat(store_statfs(0x4e6db1000/0x0/0x4ffc00000, data 0x13d462c1/0x13f3b000, compress 0x0/0x0/0x0, omap 0x4f9be, meta 0x4ec0642), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 366 ms_handle_reset con 0x560f6b4d2c00 session 0x560f68edfc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 366 ms_handle_reset con 0x560f68ee4c00 session 0x560f6cfc68c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 366 ms_handle_reset con 0x560f6917e400 session 0x560f69b9a000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 366 heartbeat osd_stat(store_statfs(0x4e6db1000/0x0/0x4ffc00000, data 0x13d462c1/0x13f3b000, compress 0x0/0x0/0x0, omap 0x4f9be, meta 0x4ec0642), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 65822720 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 65822720 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 65822720 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2543667 data_alloc: 218103808 data_used: 6901712
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.550599098s of 11.105086327s, submitted: 342
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 367 ms_handle_reset con 0x560f6917ec00 session 0x560f6b484000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 367 ms_handle_reset con 0x560f692ef400 session 0x560f6e5ee540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 148807680 unmapped: 65912832 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 148807680 unmapped: 65912832 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 367 ms_handle_reset con 0x560f6b772000 session 0x560f6c7f0e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 148807680 unmapped: 65912832 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 367 heartbeat osd_stat(store_statfs(0x4f91bc000/0x0/0x4ffc00000, data 0x193899a/0x1b30000, compress 0x0/0x0/0x0, omap 0x502a0, meta 0x4ebfd60), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 148807680 unmapped: 65912832 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 368 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fafe000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 368 ms_handle_reset con 0x560f6917e400 session 0x560f6fb22a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 368 ms_handle_reset con 0x560f6917ec00 session 0x560f6b434c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149266432 unmapped: 65454080 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 368 ms_handle_reset con 0x560f692ef400 session 0x560f6cfc7dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2610791 data_alloc: 218103808 data_used: 6905998
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149266432 unmapped: 65454080 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 368 handle_osd_map epochs [368,369], i have 369, src has [1,369]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 65323008 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 369 ms_handle_reset con 0x560f689fe400 session 0x560f6cfc6c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 65323008 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 65323008 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 370 ms_handle_reset con 0x560f68ee4c00 session 0x560f69b9b340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 370 ms_handle_reset con 0x560f6917ec00 session 0x560f685ef6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 370 ms_handle_reset con 0x560f6917e400 session 0x560f6c7f0a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 370 ms_handle_reset con 0x560f692ef400 session 0x560f6c7f1c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f869c000/0x0/0x4ffc00000, data 0x244fb0f/0x264c000, compress 0x0/0x0/0x0, omap 0x50f1d, meta 0x4ebf0e3), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 65298432 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2619575 data_alloc: 218103808 data_used: 6905998
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 370 ms_handle_reset con 0x560f6917f400 session 0x560f6bfe2700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 65167360 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.220429420s of 10.406297684s, submitted: 46
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 370 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fe0cfc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 65167360 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 65167360 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 371 ms_handle_reset con 0x560f6917e400 session 0x560f6ba5ac40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 371 heartbeat osd_stat(store_statfs(0x4f869c000/0x0/0x4ffc00000, data 0x24516dc/0x264e000, compress 0x0/0x0/0x0, omap 0x51111, meta 0x4ebeeef), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149553152 unmapped: 65167360 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 371 handle_osd_map epochs [371,372], i have 372, src has [1,372]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 372 ms_handle_reset con 0x560f6917ec00 session 0x560f69b9a380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 65159168 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2624609 data_alloc: 218103808 data_used: 6841335
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 65159168 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 65159168 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 65159168 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 65150976 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 373 ms_handle_reset con 0x560f692ef400 session 0x560f6fe0c1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 373 ms_handle_reset con 0x560f69388400 session 0x560f6fe0d6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 373 ms_handle_reset con 0x560f6d805000 session 0x560f6fe0cc40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 373 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba1ce00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 373 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x2454d36/0x2655000, compress 0x0/0x0/0x0, omap 0x5178e, meta 0x4ebe872), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 65150976 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2626903 data_alloc: 218103808 data_used: 6841370
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 373 heartbeat osd_stat(store_statfs(0x4f8695000/0x0/0x4ffc00000, data 0x2454d36/0x2655000, compress 0x0/0x0/0x0, omap 0x5178e, meta 0x4ebe872), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 65150976 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 374 ms_handle_reset con 0x560f6917e400 session 0x560f6fdb7a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.346570969s of 10.392714500s, submitted: 35
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 374 ms_handle_reset con 0x560f6917ec00 session 0x560f6b4856c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 374 ms_handle_reset con 0x560f692ef400 session 0x560f6faffc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 374 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba1ce00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149291008 unmapped: 65429504 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 374 ms_handle_reset con 0x560f6917e400 session 0x560f6ba5ac40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 375 ms_handle_reset con 0x560f6917ec00 session 0x560f6cfc6380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 64937984 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 375 ms_handle_reset con 0x560f6d805000 session 0x560f6fe0ddc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 375 heartbeat osd_stat(store_statfs(0x4f8692000/0x0/0x4ffc00000, data 0x2456944/0x265a000, compress 0x0/0x0/0x0, omap 0x51d69, meta 0x4ebe297), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 376 ms_handle_reset con 0x560f6c009c00 session 0x560f6be01180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 150978560 unmapped: 63741952 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 376 ms_handle_reset con 0x560f6917e400 session 0x560f6fe0c1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 376 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fafec40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156590080 unmapped: 58130432 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 376 ms_handle_reset con 0x560f6917ec00 session 0x560f6bfe2700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 376 ms_handle_reset con 0x560f6d805000 session 0x560f6fdb7500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718270 data_alloc: 234881024 data_used: 18249356
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 376 ms_handle_reset con 0x560f6a0e1400 session 0x560f6be00380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 376 ms_handle_reset con 0x560f6917e400 session 0x560f6fdb7a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 376 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fdb6fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156753920 unmapped: 57966592 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 376 ms_handle_reset con 0x560f6917ec00 session 0x560f685ef880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 376 ms_handle_reset con 0x560f6b987c00 session 0x560f6be00540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 377 ms_handle_reset con 0x560f6d805000 session 0x560f6fb22c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 377 ms_handle_reset con 0x560f6917ac00 session 0x560f6be01c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156712960 unmapped: 58007552 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 377 heartbeat osd_stat(store_statfs(0x4f865a000/0x0/0x4ffc00000, data 0x24860bd/0x2692000, compress 0x0/0x0/0x0, omap 0x52c15, meta 0x4ebd3eb), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 378 ms_handle_reset con 0x560f68ee4c00 session 0x560f6c7f0a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156745728 unmapped: 57974784 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 378 ms_handle_reset con 0x560f689ff000 session 0x560f69373dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 378 heartbeat osd_stat(store_statfs(0x4f8655000/0x0/0x4ffc00000, data 0x2487cad/0x2695000, compress 0x0/0x0/0x0, omap 0x53185, meta 0x4ebce7b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 378 ms_handle_reset con 0x560f6917ec00 session 0x560f6ba1c700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 378 ms_handle_reset con 0x560f6917e400 session 0x560f6fb23c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156786688 unmapped: 57933824 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 378 ms_handle_reset con 0x560f689ff000 session 0x560f6ef32fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 378 ms_handle_reset con 0x560f68ee4c00 session 0x560f6c7f1c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 378 ms_handle_reset con 0x560f6917ac00 session 0x560f6cfc6700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156786688 unmapped: 57933824 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2724581 data_alloc: 234881024 data_used: 18249910
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156786688 unmapped: 57933824 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 378 ms_handle_reset con 0x560f6917ec00 session 0x560f692b48c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.409003258s of 10.604897499s, submitted: 104
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156786688 unmapped: 57933824 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 379 ms_handle_reset con 0x560f6b987c00 session 0x560f69b9b340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 379 ms_handle_reset con 0x560f689ff000 session 0x560f69ee2380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 380 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba5afc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 156794880 unmapped: 57925632 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 380 heartbeat osd_stat(store_statfs(0x4f8656000/0x0/0x4ffc00000, data 0x248b3a2/0x2696000, compress 0x0/0x0/0x0, omap 0x53861, meta 0x4ebc79f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 159342592 unmapped: 55377920 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 381 ms_handle_reset con 0x560f6917ac00 session 0x560f6ef32000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 381 ms_handle_reset con 0x560f6917ec00 session 0x560f6b484380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 160899072 unmapped: 53821440 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2780710 data_alloc: 234881024 data_used: 18504408
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161406976 unmapped: 53313536 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161406976 unmapped: 53313536 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f8059000/0x0/0x4ffc00000, data 0x2a84e59/0x2c91000, compress 0x0/0x0/0x0, omap 0x539f4, meta 0x4ebc60c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161406976 unmapped: 53313536 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 381 handle_osd_map epochs [381,382], i have 382, src has [1,382]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161415168 unmapped: 53305344 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161415168 unmapped: 53305344 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2780876 data_alloc: 234881024 data_used: 18504408
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161693696 unmapped: 53026816 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 382 ms_handle_reset con 0x560f6b987c00 session 0x560f68edec40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161824768 unmapped: 52895744 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 382 ms_handle_reset con 0x560f689ff000 session 0x560f6ba5bc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161824768 unmapped: 52895744 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.151252747s of 11.402992249s, submitted: 113
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 383 heartbeat osd_stat(store_statfs(0x4f8036000/0x0/0x4ffc00000, data 0x2aa8910/0x2cb6000, compress 0x0/0x0/0x0, omap 0x53edf, meta 0x4ebc121), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 384 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ef32e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f8036000/0x0/0x4ffc00000, data 0x2aa8910/0x2cb6000, compress 0x0/0x0/0x0, omap 0x53edf, meta 0x4ebc121), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161832960 unmapped: 52887552 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 384 ms_handle_reset con 0x560f6917ac00 session 0x560f6ad268c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161882112 unmapped: 52838400 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2790778 data_alloc: 234881024 data_used: 18504708
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161947648 unmapped: 52772864 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161947648 unmapped: 52772864 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 385 ms_handle_reset con 0x560f6917ec00 session 0x560f6c7f16c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161955840 unmapped: 52764672 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f8027000/0x0/0x4ffc00000, data 0x2c67c56/0x2cc5000, compress 0x0/0x0/0x0, omap 0x548fa, meta 0x4ebb706), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 386 ms_handle_reset con 0x560f6c008c00 session 0x560f6ba5bc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161964032 unmapped: 52756480 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 386 ms_handle_reset con 0x560f689ff000 session 0x560f6fb23c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161964032 unmapped: 52756480 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2812518 data_alloc: 234881024 data_used: 18506491
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 387 ms_handle_reset con 0x560f68ee4c00 session 0x560f6b4848c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161964032 unmapped: 52756480 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161964032 unmapped: 52756480 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 387 ms_handle_reset con 0x560f6917ec00 session 0x560f6e5ef880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 387 ms_handle_reset con 0x560f6917ac00 session 0x560f6fdb68c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 387 ms_handle_reset con 0x560f689fec00 session 0x560f6a0c8fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 387 heartbeat osd_stat(store_statfs(0x4f8021000/0x0/0x4ffc00000, data 0x2c6b3fe/0x2ccb000, compress 0x0/0x0/0x0, omap 0x55117, meta 0x4ebaee9), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162013184 unmapped: 52707328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162013184 unmapped: 52707328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.343843460s of 11.460665703s, submitted: 65
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162013184 unmapped: 52707328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f801e000/0x0/0x4ffc00000, data 0x2c6e3fe/0x2cce000, compress 0x0/0x0/0x0, omap 0x55321, meta 0x4ebacdf), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f8019000/0x0/0x4ffc00000, data 0x2c6fe99/0x2cd1000, compress 0x0/0x0/0x0, omap 0x554b7, meta 0x4ebab49), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2817335 data_alloc: 234881024 data_used: 18506491
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162013184 unmapped: 52707328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f8014000/0x0/0x4ffc00000, data 0x2c71a51/0x2cd4000, compress 0x0/0x0/0x0, omap 0x55613, meta 0x4eba9ed), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 389 ms_handle_reset con 0x560f689ff000 session 0x560f69b9a380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162013184 unmapped: 52707328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 389 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ef32380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f8014000/0x0/0x4ffc00000, data 0x2c71a51/0x2cd4000, compress 0x0/0x0/0x0, omap 0x55613, meta 0x4eba9ed), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162013184 unmapped: 52707328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 390 ms_handle_reset con 0x560f6917ac00 session 0x560f6fe0c380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162013184 unmapped: 52707328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 391 ms_handle_reset con 0x560f6917ec00 session 0x560f69b5b180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 391 ms_handle_reset con 0x560f69f93800 session 0x560f6b897180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162021376 unmapped: 52699136 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2828395 data_alloc: 234881024 data_used: 18596603
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 391 ms_handle_reset con 0x560f6917ac00 session 0x560f6be00000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162029568 unmapped: 52690944 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 391 heartbeat osd_stat(store_statfs(0x4f8010000/0x0/0x4ffc00000, data 0x2c7514e/0x2cdc000, compress 0x0/0x0/0x0, omap 0x55e36, meta 0x4eba1ca), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162029568 unmapped: 52690944 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 392 ms_handle_reset con 0x560f6917ec00 session 0x560f6fdb7880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162308096 unmapped: 52412416 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 392 ms_handle_reset con 0x560f6b22f400 session 0x560f6e5ef500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 393 ms_handle_reset con 0x560f69f92800 session 0x560f6cfc68c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 52133888 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 393 heartbeat osd_stat(store_statfs(0x4f7fc8000/0x0/0x4ffc00000, data 0x2cb88f6/0x2d22000, compress 0x0/0x0/0x0, omap 0x56867, meta 0x4eb9799), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 393 ms_handle_reset con 0x560f6d03ec00 session 0x560f6ba4ce00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.971599579s of 10.125757217s, submitted: 77
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 393 ms_handle_reset con 0x560f6917ac00 session 0x560f6c7f0e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 393 ms_handle_reset con 0x560f6917ec00 session 0x560f6fafe000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162619392 unmapped: 52101120 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2841876 data_alloc: 234881024 data_used: 18630329
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 393 heartbeat osd_stat(store_statfs(0x4f7fc9000/0x0/0x4ffc00000, data 0x2cb8894/0x2d21000, compress 0x0/0x0/0x0, omap 0x56a71, meta 0x4eb958f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162643968 unmapped: 52076544 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 52068352 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 394 ms_handle_reset con 0x560f69f92800 session 0x560f6f023c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 394 ms_handle_reset con 0x560f6b22f400 session 0x560f6eb3ac40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162660352 unmapped: 52060160 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 394 heartbeat osd_stat(store_statfs(0x4f7fc3000/0x0/0x4ffc00000, data 0x2cba45c/0x2d27000, compress 0x0/0x0/0x0, omap 0x57092, meta 0x4eb8f6e), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 394 ms_handle_reset con 0x560f6ebc3400 session 0x560f6bfe2700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 395 ms_handle_reset con 0x560f6b775800 session 0x560f6b484380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 52068352 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 395 ms_handle_reset con 0x560f6917ac00 session 0x560f6ef32000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 395 ms_handle_reset con 0x560f6917ec00 session 0x560f6be00380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162668544 unmapped: 52051968 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2851646 data_alloc: 234881024 data_used: 18634425
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 396 ms_handle_reset con 0x560f69f92800 session 0x560f6fe0c1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 396 ms_handle_reset con 0x560f6b22f400 session 0x560f6952a1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162676736 unmapped: 52043776 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162676736 unmapped: 52043776 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f7e0b000/0x0/0x4ffc00000, data 0x2e72c3c/0x2ee1000, compress 0x0/0x0/0x0, omap 0x576b6, meta 0x4eb894a), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 50700288 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 396 heartbeat osd_stat(store_statfs(0x4f7e0b000/0x0/0x4ffc00000, data 0x2e72c3c/0x2ee1000, compress 0x0/0x0/0x0, omap 0x576b6, meta 0x4eb894a), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 50700288 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 397 ms_handle_reset con 0x560f6917ac00 session 0x560f69b9b500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 397 ms_handle_reset con 0x560f6917ec00 session 0x560f69ee2380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 397 ms_handle_reset con 0x560f69f92800 session 0x560f69ee2000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 164020224 unmapped: 50700288 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 397 ms_handle_reset con 0x560f6b775800 session 0x560f69b9b880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.313632011s of 10.417798996s, submitted: 71
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 397 ms_handle_reset con 0x560f6b770000 session 0x560f69ee2540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 397 ms_handle_reset con 0x560f6917ac00 session 0x560f69373c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 397 ms_handle_reset con 0x560f6917ec00 session 0x560f6cfc6380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 397 ms_handle_reset con 0x560f69f92800 session 0x560f69c021c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 397 ms_handle_reset con 0x560f6b775800 session 0x560f6f023340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2926431 data_alloc: 234881024 data_used: 21052190
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165068800 unmapped: 49651712 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165068800 unmapped: 49651712 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165838848 unmapped: 48881664 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 397 heartbeat osd_stat(store_statfs(0x4f65f1000/0x0/0x4ffc00000, data 0x34e9749/0x355b000, compress 0x0/0x0/0x0, omap 0x57b7e, meta 0x6058482), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165871616 unmapped: 48848896 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 397 ms_handle_reset con 0x560f6a0e5c00 session 0x560f6f022c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165871616 unmapped: 48848896 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935819 data_alloc: 234881024 data_used: 21453870
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 166428672 unmapped: 48291840 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 169222144 unmapped: 45498368 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 169222144 unmapped: 45498368 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 397 handle_osd_map epochs [397,398], i have 398, src has [1,398]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f65f1000/0x0/0x4ffc00000, data 0x34e9749/0x355b000, compress 0x0/0x0/0x0, omap 0x57b7e, meta 0x6058482), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168820736 unmapped: 45899776 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 ms_handle_reset con 0x560f69f92800 session 0x560f6ba1c540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168820736 unmapped: 45899776 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972211 data_alloc: 234881024 data_used: 26134574
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168820736 unmapped: 45899776 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.892257690s of 11.005919456s, submitted: 57
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 ms_handle_reset con 0x560f6b775800 session 0x560f6fb23500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 ms_handle_reset con 0x560f6b4cf800 session 0x560f6ef32380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168861696 unmapped: 45858816 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 ms_handle_reset con 0x560f6b223c00 session 0x560f6a0c9a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168861696 unmapped: 45858816 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 ms_handle_reset con 0x560f692efc00 session 0x560f6b4856c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 ms_handle_reset con 0x560f69f92800 session 0x560f6e5ef340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168747008 unmapped: 45973504 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f65ec000/0x0/0x4ffc00000, data 0x34ed1c8/0x3560000, compress 0x0/0x0/0x0, omap 0x5820f, meta 0x6057df1), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168747008 unmapped: 45973504 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2971943 data_alloc: 234881024 data_used: 26134574
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168747008 unmapped: 45973504 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168747008 unmapped: 45973504 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 176373760 unmapped: 38346752 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 173408256 unmapped: 41312256 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 ms_handle_reset con 0x560f689ff000 session 0x560f6b485880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba5ae00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 ms_handle_reset con 0x560f6b223c00 session 0x560f6ba4d340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 174063616 unmapped: 40656896 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f5b40000/0x0/0x4ffc00000, data 0x3f981b8/0x400a000, compress 0x0/0x0/0x0, omap 0x5820f, meta 0x6057df1), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3044882 data_alloc: 234881024 data_used: 26358318
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 174063616 unmapped: 40656896 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f5b40000/0x0/0x4ffc00000, data 0x3f981b8/0x400a000, compress 0x0/0x0/0x0, omap 0x5820f, meta 0x6057df1), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.050834656s of 10.364282608s, submitted: 161
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 ms_handle_reset con 0x560f6b4cf800 session 0x560f6fdb76c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 173154304 unmapped: 41566208 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 ms_handle_reset con 0x560f689ff000 session 0x560f6fb221c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 399 ms_handle_reset con 0x560f68ee4c00 session 0x560f6a0c9500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 171876352 unmapped: 42844160 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 399 ms_handle_reset con 0x560f69388400 session 0x560f6fb22a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 399 ms_handle_reset con 0x560f6c009800 session 0x560f69153180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 171876352 unmapped: 42844160 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 399 ms_handle_reset con 0x560f69f92800 session 0x560f6fe0ddc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f5d63000/0x0/0x4ffc00000, data 0x3bc0da8/0x3de7000, compress 0x0/0x0/0x0, omap 0x58667, meta 0x6057999), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 171909120 unmapped: 42811392 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001620 data_alloc: 234881024 data_used: 24381998
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 399 ms_handle_reset con 0x560f689ff000 session 0x560f6fafe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 171925504 unmapped: 42795008 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 399 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fdb7880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 166109184 unmapped: 48611328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 166109184 unmapped: 48611328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 166109184 unmapped: 48611328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 401 ms_handle_reset con 0x560f69388400 session 0x560f6ad268c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 166109184 unmapped: 48611328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f6e99000/0x0/0x4ffc00000, data 0x2a883e9/0x2caf000, compress 0x0/0x0/0x0, omap 0x5908b, meta 0x6056f75), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2833330 data_alloc: 234881024 data_used: 13199902
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 166109184 unmapped: 48611328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f6e99000/0x0/0x4ffc00000, data 0x2a883e9/0x2caf000, compress 0x0/0x0/0x0, omap 0x5908b, meta 0x6056f75), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 166109184 unmapped: 48611328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 166109184 unmapped: 48611328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f6e99000/0x0/0x4ffc00000, data 0x2a883e9/0x2caf000, compress 0x0/0x0/0x0, omap 0x5908b, meta 0x6056f75), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 166109184 unmapped: 48611328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 166109184 unmapped: 48611328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2833330 data_alloc: 234881024 data_used: 13199902
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 166109184 unmapped: 48611328 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.774635315s of 15.531465530s, submitted: 77
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165068800 unmapped: 49651712 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f6e9a000/0x0/0x4ffc00000, data 0x2a8b3e9/0x2cb2000, compress 0x0/0x0/0x0, omap 0x5908b, meta 0x6056f75), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f6e9a000/0x0/0x4ffc00000, data 0x2a8b3e9/0x2cb2000, compress 0x0/0x0/0x0, omap 0x5908b, meta 0x6056f75), peers [0,1] op hist [0,1])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 401 ms_handle_reset con 0x560f6c009800 session 0x560f6fe0dc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165068800 unmapped: 49651712 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 401 ms_handle_reset con 0x560f6b223c00 session 0x560f6a0c8fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f6e9a000/0x0/0x4ffc00000, data 0x2a8b3e9/0x2cb2000, compress 0x0/0x0/0x0, omap 0x590d3, meta 0x6056f2d), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 401 handle_osd_map epochs [402,402], i have 402, src has [1,402]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165289984 unmapped: 49430528 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f689ff000 session 0x560f6ef33180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165289984 unmapped: 49430528 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2838074 data_alloc: 234881024 data_used: 13789726
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165289984 unmapped: 49430528 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fe0c380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165855232 unmapped: 48865280 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f69388400 session 0x560f6ba5ac40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6c009800 session 0x560f6b896fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165863424 unmapped: 48857088 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f6baa000/0x0/0x4ffc00000, data 0x2d78eca/0x2fa2000, compress 0x0/0x0/0x0, omap 0x592a5, meta 0x6056d5b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165863424 unmapped: 48857088 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165871616 unmapped: 48848896 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2859613 data_alloc: 234881024 data_used: 13789726
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165871616 unmapped: 48848896 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165871616 unmapped: 48848896 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6b775800 session 0x560f69b9a540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165871616 unmapped: 48848896 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165871616 unmapped: 48848896 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.061322212s of 12.205549240s, submitted: 62
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6917ac00 session 0x560f6fe0d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6917ec00 session 0x560f6fe0cfc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f6ba5000/0x0/0x4ffc00000, data 0x2d7deca/0x2fa7000, compress 0x0/0x0/0x0, omap 0x592a5, meta 0x6056d5b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f689ff000 session 0x560f69b5a540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 53755904 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2720029 data_alloc: 218103808 data_used: 7696926
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fe0dc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 53755904 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f69388400 session 0x560f6fdb7880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f7c83000/0x0/0x4ffc00000, data 0x1ca0e58/0x1ec8000, compress 0x0/0x0/0x0, omap 0x592a5, meta 0x6056d5b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f689ff000 session 0x560f6ba5bc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 160964608 unmapped: 53755904 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fe0ddc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6917ac00 session 0x560f6a0c9a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161267712 unmapped: 53452800 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6917ec00 session 0x560f6fafe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 160776192 unmapped: 53944320 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f689fd800 session 0x560f6fb23c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f689ff000 session 0x560f6cfc7340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f7c5f000/0x0/0x4ffc00000, data 0x1cc4e7b/0x1eed000, compress 0x0/0x0/0x0, omap 0x5938d, meta 0x6056c73), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 53059584 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fdfa000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f7c5e000/0x0/0x4ffc00000, data 0x1cc4e8b/0x1eee000, compress 0x0/0x0/0x0, omap 0x5938d, meta 0x6056c73), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6917ac00 session 0x560f6ad27a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2737238 data_alloc: 218103808 data_used: 8917948
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f7c5e000/0x0/0x4ffc00000, data 0x1cc4e8b/0x1eee000, compress 0x0/0x0/0x0, omap 0x5938d, meta 0x6056c73), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162144256 unmapped: 52576256 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162144256 unmapped: 52576256 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162144256 unmapped: 52576256 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f7c5e000/0x0/0x4ffc00000, data 0x1cc4e8b/0x1eee000, compress 0x0/0x0/0x0, omap 0x5938d, meta 0x6056c73), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162144256 unmapped: 52576256 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162144256 unmapped: 52576256 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2737238 data_alloc: 218103808 data_used: 8917948
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162144256 unmapped: 52576256 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.308311462s of 12.385532379s, submitted: 39
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f7c5e000/0x0/0x4ffc00000, data 0x1cc4e8b/0x1eee000, compress 0x0/0x0/0x0, omap 0x5938d, meta 0x6056c73), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162144256 unmapped: 52576256 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6df2d800 session 0x560f69372a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f7c5e000/0x0/0x4ffc00000, data 0x1cc4e8b/0x1eee000, compress 0x0/0x0/0x0, omap 0x5939f, meta 0x6056c61), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f68a00400 session 0x560f6f022540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6917b400 session 0x560f6fe0ce00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162144256 unmapped: 52576256 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 162144256 unmapped: 52576256 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 165634048 unmapped: 49086464 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2802715 data_alloc: 234881024 data_used: 9855932
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 167256064 unmapped: 47464448 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f747c000/0x0/0x4ffc00000, data 0x249fe8b/0x26c9000, compress 0x0/0x0/0x0, omap 0x5939f, meta 0x6056c61), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 167256064 unmapped: 47464448 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 167329792 unmapped: 47390720 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 167206912 unmapped: 47513600 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f689ff000 session 0x560f6e5ee000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 167206912 unmapped: 47513600 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2812634 data_alloc: 234881024 data_used: 9814972
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 167206912 unmapped: 47513600 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f73d4000/0x0/0x4ffc00000, data 0x254deee/0x2778000, compress 0x0/0x0/0x0, omap 0x5939f, meta 0x6056c61), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 167223296 unmapped: 47497216 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.899003029s of 11.151945114s, submitted: 127
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6b775800 session 0x560f6a0c8fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6c009800 session 0x560f6fdfbc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168296448 unmapped: 46424064 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f689fc800 session 0x560f6b435dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f73c2000/0x0/0x4ffc00000, data 0x255feee/0x278a000, compress 0x0/0x0/0x0, omap 0x5939f, meta 0x6056c61), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f689fc800 session 0x560f6b485dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f689ff000 session 0x560f6e5efdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 167903232 unmapped: 46817280 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6b770800 session 0x560f69373180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168173568 unmapped: 46546944 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080110 data_alloc: 234881024 data_used: 9726908
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168173568 unmapped: 46546944 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168173568 unmapped: 46546944 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168173568 unmapped: 46546944 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f419f000/0x0/0x4ffc00000, data 0x5782ecb/0x59ac000, compress 0x0/0x0/0x0, omap 0x5961e, meta 0x60569e2), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168173568 unmapped: 46546944 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168173568 unmapped: 46546944 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080110 data_alloc: 234881024 data_used: 9726908
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f69fa8800 session 0x560f6ba1d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168189952 unmapped: 46530560 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6d03f000 session 0x560f69b9b880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f689fc800 session 0x560f6b485500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168255488 unmapped: 46465024 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f689ff000 session 0x560f685ee380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f69fa8800 session 0x560f6f0228c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168255488 unmapped: 46465024 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f1da0000/0x0/0x4ffc00000, data 0x7b82eca/0x7dac000, compress 0x0/0x0/0x0, omap 0x5961e, meta 0x60569e2), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6b770800 session 0x560f6e5ef180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6ebc4800 session 0x560f6fb221c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168255488 unmapped: 46465024 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f689fc800 session 0x560f6b4841c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.911906242s of 11.692256927s, submitted: 122
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f689ff000 session 0x560f6ba4cfc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f69fa8800 session 0x560f6f023340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6b770800 session 0x560f6ef32fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 46456832 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3277109 data_alloc: 234881024 data_used: 9731974
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 46456832 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f1d9d000/0x0/0x4ffc00000, data 0x7b82f20/0x7daf000, compress 0x0/0x0/0x0, omap 0x5961e, meta 0x60569e2), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 46456832 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 46456832 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 40468480 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 40468480 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f1d9d000/0x0/0x4ffc00000, data 0x7b82f20/0x7daf000, compress 0x0/0x0/0x0, omap 0x5961e, meta 0x60569e2), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345589 data_alloc: 234881024 data_used: 21446022
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 40468480 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 40468480 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f1d9d000/0x0/0x4ffc00000, data 0x7b82f20/0x7daf000, compress 0x0/0x0/0x0, omap 0x5961e, meta 0x60569e2), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 40468480 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 40468480 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 40468480 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345589 data_alloc: 234881024 data_used: 21446022
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f1d9d000/0x0/0x4ffc00000, data 0x7b82f20/0x7daf000, compress 0x0/0x0/0x0, omap 0x5961e, meta 0x60569e2), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 174252032 unmapped: 40468480 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.225348473s of 12.246403694s, submitted: 9
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 174858240 unmapped: 39862272 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 174874624 unmapped: 39845888 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 ms_handle_reset con 0x560f6b4ca400 session 0x560f6fe0d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 191537152 unmapped: 23183360 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f19ad000/0x0/0x4ffc00000, data 0x7b82f20/0x7daf000, compress 0x0/0x0/0x0, omap 0x5961e, meta 0x60569e2), peers [0,1] op hist [2])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 192069632 unmapped: 22650880 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398733 data_alloc: 234881024 data_used: 25074646
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 186769408 unmapped: 27951104 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 186925056 unmapped: 27795456 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 188293120 unmapped: 26427392 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 191938560 unmapped: 22781952 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 191938560 unmapped: 22781952 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f1783000/0x0/0x4ffc00000, data 0x819cf20/0x83c9000, compress 0x0/0x0/0x0, omap 0x5961e, meta 0x60569e2), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3452261 data_alloc: 251658240 data_used: 31152086
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 191971328 unmapped: 22749184 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 191987712 unmapped: 22732800 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 191995904 unmapped: 22724608 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 191995904 unmapped: 22724608 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 191995904 unmapped: 22724608 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f1783000/0x0/0x4ffc00000, data 0x819cf20/0x83c9000, compress 0x0/0x0/0x0, omap 0x5961e, meta 0x60569e2), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458277 data_alloc: 251658240 data_used: 32810966
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 192438272 unmapped: 22282240 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.582061768s of 14.998506546s, submitted: 166
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 192503808 unmapped: 22216704 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f177e000/0x0/0x4ffc00000, data 0x819eabc/0x83cc000, compress 0x0/0x0/0x0, omap 0x5977c, meta 0x6056884), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 21569536 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 403 ms_handle_reset con 0x560f6b5ca000 session 0x560f69b5aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 403 ms_handle_reset con 0x560f6b1f8000 session 0x560f6fb22a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 10657792 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 403 ms_handle_reset con 0x560f69fa8800 session 0x560f6bfe2000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200851456 unmapped: 13869056 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3561854 data_alloc: 251658240 data_used: 34178518
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200851456 unmapped: 13869056 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f09b1000/0x0/0x4ffc00000, data 0x8fdca89/0x919a000, compress 0x0/0x0/0x0, omap 0x5980a, meta 0x60567f6), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200851456 unmapped: 13869056 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200851456 unmapped: 13869056 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200851456 unmapped: 13869056 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200859648 unmapped: 13860864 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3561854 data_alloc: 251658240 data_used: 34178518
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200859648 unmapped: 13860864 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200859648 unmapped: 13860864 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.406622887s of 10.777627945s, submitted: 157
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 403 ms_handle_reset con 0x560f689fc800 session 0x560f685ef6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f09b2000/0x0/0x4ffc00000, data 0x8fdca89/0x919a000, compress 0x0/0x0/0x0, omap 0x5980a, meta 0x60567f6), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 403 ms_handle_reset con 0x560f689ff000 session 0x560f6e5ef340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201965568 unmapped: 12754944 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 403 ms_handle_reset con 0x560f69fa8800 session 0x560f6ad26000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 403 ms_handle_reset con 0x560f689fc800 session 0x560f6fe0d500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 12738560 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201981952 unmapped: 12738560 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3559039 data_alloc: 251658240 data_used: 34232243
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 12722176 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 403 handle_osd_map epochs [403,404], i have 404, src has [1,404]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 ms_handle_reset con 0x560f6b770800 session 0x560f6ad268c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 12722176 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 ms_handle_reset con 0x560f6b4ca800 session 0x560f6fb22540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201998336 unmapped: 12722176 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f09aa000/0x0/0x4ffc00000, data 0x920a687/0x91a0000, compress 0x0/0x0/0x0, omap 0x5a186, meta 0x6055e7a), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f09aa000/0x0/0x4ffc00000, data 0x920a687/0x91a0000, compress 0x0/0x0/0x0, omap 0x5a186, meta 0x6055e7a), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202031104 unmapped: 12689408 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202031104 unmapped: 12689408 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3585027 data_alloc: 251658240 data_used: 34227555
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202031104 unmapped: 12689408 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f09aa000/0x0/0x4ffc00000, data 0x920a687/0x91a0000, compress 0x0/0x0/0x0, omap 0x5a186, meta 0x6055e7a), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202031104 unmapped: 12689408 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202031104 unmapped: 12689408 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f09aa000/0x0/0x4ffc00000, data 0x920a687/0x91a0000, compress 0x0/0x0/0x0, omap 0x5a186, meta 0x6055e7a), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.738533974s of 11.892711639s, submitted: 30
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 ms_handle_reset con 0x560f69ae5800 session 0x560f6c7f0a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202039296 unmapped: 12681216 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202039296 unmapped: 12681216 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3585548 data_alloc: 251658240 data_used: 34231651
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202039296 unmapped: 12681216 heap: 214720512 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 206790656 unmapped: 11124736 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f0426000/0x0/0x4ffc00000, data 0x978f6aa/0x9726000, compress 0x0/0x0/0x0, omap 0x5a186, meta 0x6055e7a), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 ms_handle_reset con 0x560f6b4ca800 session 0x560f6c7f0fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 11051008 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 11026432 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 11026432 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3638538 data_alloc: 251658240 data_used: 36616645
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 11026432 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 206921728 unmapped: 10993664 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f0352000/0x0/0x4ffc00000, data 0x98636aa/0x97fa000, compress 0x0/0x0/0x0, omap 0x5a186, meta 0x6055e7a), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204808192 unmapped: 13107200 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204808192 unmapped: 13107200 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204808192 unmapped: 13107200 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.592720032s of 11.657809258s, submitted: 17
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 ms_handle_reset con 0x560f6b1f8000 session 0x560f6fe0d6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 ms_handle_reset con 0x560f6b5ca000 session 0x560f6b8961c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3648506 data_alloc: 251658240 data_used: 37670341
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202858496 unmapped: 15056896 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 ms_handle_reset con 0x560f6917dc00 session 0x560f6ef32000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202866688 unmapped: 15048704 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202866688 unmapped: 15048704 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f0352000/0x0/0x4ffc00000, data 0x98636aa/0x97fa000, compress 0x0/0x0/0x0, omap 0x5a390, meta 0x6055c70), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 ms_handle_reset con 0x560f6ebc4400 session 0x560f6fdb7880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 203104256 unmapped: 14811136 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 ms_handle_reset con 0x560f6917dc00 session 0x560f6be016c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 203890688 unmapped: 14024704 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 404 handle_osd_map epochs [404,405], i have 405, src has [1,405]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3626618 data_alloc: 251658240 data_used: 39093603
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 203964416 unmapped: 13950976 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 405 heartbeat osd_stat(store_statfs(0x4f09ab000/0x0/0x4ffc00000, data 0x920a6aa/0x91a1000, compress 0x0/0x0/0x0, omap 0x5a390, meta 0x6055c70), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 405 ms_handle_reset con 0x560f6a0e2400 session 0x560f6a0c9500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 405 ms_handle_reset con 0x560f6b4ca000 session 0x560f68ede540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 203997184 unmapped: 13918208 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 405 ms_handle_reset con 0x560f6b1f8000 session 0x560f6f022000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204029952 unmapped: 13885440 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204029952 unmapped: 13885440 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204029952 unmapped: 13885440 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 405 ms_handle_reset con 0x560f6b4ca800 session 0x560f6c7f16c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3492944 data_alloc: 251658240 data_used: 38106432
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205651968 unmapped: 12263424 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 405 heartbeat osd_stat(store_statfs(0x4f133b000/0x0/0x4ffc00000, data 0x841e215/0x864c000, compress 0x0/0x0/0x0, omap 0x5a60f, meta 0x60559f1), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205651968 unmapped: 12263424 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205651968 unmapped: 12263424 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.881166458s of 13.211331367s, submitted: 74
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f133b000/0x0/0x4ffc00000, data 0x841e215/0x864c000, compress 0x0/0x0/0x0, omap 0x5a60f, meta 0x60559f1), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205701120 unmapped: 12214272 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205701120 unmapped: 12214272 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3496374 data_alloc: 251658240 data_used: 38110430
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f14fb000/0x0/0x4ffc00000, data 0x841fc94/0x864f000, compress 0x0/0x0/0x0, omap 0x5a7e2, meta 0x605581e), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205701120 unmapped: 12214272 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205701120 unmapped: 12214272 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205701120 unmapped: 12214272 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205701120 unmapped: 12214272 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f14fb000/0x0/0x4ffc00000, data 0x841fc94/0x864f000, compress 0x0/0x0/0x0, omap 0x5a7e2, meta 0x605581e), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205717504 unmapped: 12197888 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497142 data_alloc: 251658240 data_used: 38081758
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205717504 unmapped: 12197888 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205717504 unmapped: 12197888 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f14fd000/0x0/0x4ffc00000, data 0x841fc94/0x864f000, compress 0x0/0x0/0x0, omap 0x5a7e2, meta 0x605581e), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205717504 unmapped: 12197888 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205717504 unmapped: 12197888 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205717504 unmapped: 12197888 heap: 217915392 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 406 heartbeat osd_stat(store_statfs(0x4f14fd000/0x0/0x4ffc00000, data 0x841fc94/0x864f000, compress 0x0/0x0/0x0, omap 0x5a7e2, meta 0x605581e), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.969118118s of 11.985156059s, submitted: 23
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3516722 data_alloc: 251658240 data_used: 38081758
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205783040 unmapped: 16334848 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205783040 unmapped: 16334848 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205783040 unmapped: 16334848 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f6917dc00 session 0x560f6fdb6a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f6a0e2400 session 0x560f69c02fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f6b1f8000 session 0x560f685efc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f6b4ca000 session 0x560f6eb3ac40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 16293888 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f689fc800 session 0x560f6fdfa1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f69fa8800 session 0x560f6e5eee00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f6917dc00 session 0x560f6bfe3c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 heartbeat osd_stat(store_statfs(0x4f10f5000/0x0/0x4ffc00000, data 0x8821903/0x8a55000, compress 0x0/0x0/0x0, omap 0x5ac6b, meta 0x6055395), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 16293888 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3533278 data_alloc: 251658240 data_used: 39539934
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205922304 unmapped: 16195584 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f6b770800 session 0x560f6e5ef180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f6b1f9c00 session 0x560f6ef32a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f6a0e2400 session 0x560f6ba1cc40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205357056 unmapped: 16760832 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f6917dc00 session 0x560f6ba4d340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205357056 unmapped: 16760832 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f6b773000 session 0x560f6b897a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f6b4f5000 session 0x560f6ba5b880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205357056 unmapped: 16760832 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f69ae4800 session 0x560f6ef32a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205365248 unmapped: 16752640 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 heartbeat osd_stat(store_statfs(0x4f10fa000/0x0/0x4ffc00000, data 0x882186f/0x8a52000, compress 0x0/0x0/0x0, omap 0x5ac6b, meta 0x6055395), peers [0,1] op hist [0,0,0,1])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f6b4cb000 session 0x560f68ede540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3535691 data_alloc: 251658240 data_used: 41100475
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 205373440 unmapped: 16744448 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 ms_handle_reset con 0x560f6917dc00 session 0x560f6ba4ddc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.324517250s of 10.707088470s, submitted: 82
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 408 ms_handle_reset con 0x560f69ae4800 session 0x560f6ba4d500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 408 ms_handle_reset con 0x560f6b4f5000 session 0x560f6c7f0e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 408 ms_handle_reset con 0x560f6b773000 session 0x560f6f023dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 195575808 unmapped: 26542080 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 408 heartbeat osd_stat(store_statfs(0x4f4d4b000/0x0/0x4ffc00000, data 0x4bce3fd/0x4dff000, compress 0x0/0x0/0x0, omap 0x5b0b8, meta 0x6054f48), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 195575808 unmapped: 26542080 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 408 ms_handle_reset con 0x560f68ee4c00 session 0x560f6c7f0700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 408 ms_handle_reset con 0x560f68ee4c00 session 0x560f69ee2700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 27590656 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 408 heartbeat osd_stat(store_statfs(0x4f4d4b000/0x0/0x4ffc00000, data 0x4bce3fd/0x4dff000, compress 0x0/0x0/0x0, omap 0x5b0b8, meta 0x6054f48), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 27590656 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3113117 data_alloc: 234881024 data_used: 22658253
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 27590656 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 27590656 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 27590656 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 27590656 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 410 heartbeat osd_stat(store_statfs(0x4f4d48000/0x0/0x4ffc00000, data 0x4bcfe7c/0x4e02000, compress 0x0/0x0/0x0, omap 0x5b565, meta 0x6054a9b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 410 ms_handle_reset con 0x560f6917dc00 session 0x560f6bfe36c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194527232 unmapped: 27590656 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 410 ms_handle_reset con 0x560f69ae4800 session 0x560f6bfe3c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 410 ms_handle_reset con 0x560f6b4f5000 session 0x560f6e5ee000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3120465 data_alloc: 234881024 data_used: 23182541
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194330624 unmapped: 27787264 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194330624 unmapped: 27787264 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 410 ms_handle_reset con 0x560f6b773000 session 0x560f6ba5b500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.172562599s of 11.304408073s, submitted: 46
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194330624 unmapped: 27787264 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 411 ms_handle_reset con 0x560f6917dc00 session 0x560f6fdb68c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 411 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba5bc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 411 heartbeat osd_stat(store_statfs(0x4f4d41000/0x0/0x4ffc00000, data 0x4bd35c4/0x4e09000, compress 0x0/0x0/0x0, omap 0x5b861, meta 0x605479f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194330624 unmapped: 27787264 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194330624 unmapped: 27787264 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 412 ms_handle_reset con 0x560f69ae4800 session 0x560f6f023340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130093 data_alloc: 234881024 data_used: 23182541
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 27779072 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 412 ms_handle_reset con 0x560f6b4f5000 session 0x560f6adbe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f4d3c000/0x0/0x4ffc00000, data 0x4bd517f/0x4e0e000, compress 0x0/0x0/0x0, omap 0x5bcd8, meta 0x6054328), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 27779072 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 412 ms_handle_reset con 0x560f68ee5000 session 0x560f685ef6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 412 ms_handle_reset con 0x560f68ee4c00 session 0x560f6cfc6c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 27779072 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 412 ms_handle_reset con 0x560f69fa9400 session 0x560f69b9a540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 412 ms_handle_reset con 0x560f68ee5000 session 0x560f6fb22a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 27779072 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 412 ms_handle_reset con 0x560f6917dc00 session 0x560f6b485dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f4d3c000/0x0/0x4ffc00000, data 0x4bd5170/0x4e0d000, compress 0x0/0x0/0x0, omap 0x5bcd8, meta 0x6054328), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 27779072 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 412 ms_handle_reset con 0x560f69ae4800 session 0x560f6ef336c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 413 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fdfbc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 413 ms_handle_reset con 0x560f68ee5000 session 0x560f6f023c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131099 data_alloc: 234881024 data_used: 23182557
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 27779072 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 413 ms_handle_reset con 0x560f6917dc00 session 0x560f6f022380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 413 ms_handle_reset con 0x560f69ae4800 session 0x560f6be00e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 27779072 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 413 heartbeat osd_stat(store_statfs(0x4f4d3a000/0x0/0x4ffc00000, data 0x4bd6d60/0x4e10000, compress 0x0/0x0/0x0, omap 0x5be33, meta 0x60541cd), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.061160088s of 10.095658302s, submitted: 16
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 414 ms_handle_reset con 0x560f69fa9400 session 0x560f6c7f0a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 27779072 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 414 ms_handle_reset con 0x560f68ee4c00 session 0x560f6b896c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 414 ms_handle_reset con 0x560f68ee5000 session 0x560f685efc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 27779072 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 27779072 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f4d39000/0x0/0x4ffc00000, data 0x4bd8930/0x4e11000, compress 0x0/0x0/0x0, omap 0x5c2ad, meta 0x6053d53), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 414 ms_handle_reset con 0x560f6917dc00 session 0x560f6ef32000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134211 data_alloc: 234881024 data_used: 23182541
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 27779072 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 415 ms_handle_reset con 0x560f69ae4800 session 0x560f6fe0ddc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 27770880 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 415 ms_handle_reset con 0x560f69fa9400 session 0x560f6b485500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f4d35000/0x0/0x4ffc00000, data 0x4bda54a/0x4e15000, compress 0x0/0x0/0x0, omap 0x5c409, meta 0x6053bf7), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 27770880 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 27762688 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 417 ms_handle_reset con 0x560f68ee4c00 session 0x560f6f022000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 27762688 heap: 222117888 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 417 ms_handle_reset con 0x560f68ee5000 session 0x560f6ba1c540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3198947 data_alloc: 234881024 data_used: 23183496
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 198574080 unmapped: 36151296 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 417 ms_handle_reset con 0x560f6917dc00 session 0x560f6c7f16c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 417 ms_handle_reset con 0x560f69ae4800 session 0x560f6cfc7180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 417 ms_handle_reset con 0x560f69fa9400 session 0x560f6be01180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 417 ms_handle_reset con 0x560f68ee4c00 session 0x560f692b56c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f2d2f000/0x0/0x4ffc00000, data 0x6bddc61/0x6e1d000, compress 0x0/0x0/0x0, omap 0x5cc61, meta 0x605339f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 40345600 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.046445847s of 10.369565010s, submitted: 53
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 418 ms_handle_reset con 0x560f68ee5000 session 0x560f6be00e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 40345600 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f2d2f000/0x0/0x4ffc00000, data 0x6bddc61/0x6e1d000, compress 0x0/0x0/0x0, omap 0x5cc9b, meta 0x6053365), peers [0,1] op hist [0,0,0,0,0,1])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 418 ms_handle_reset con 0x560f6917dc00 session 0x560f6c7f0000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 40345600 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 418 ms_handle_reset con 0x560f6b4f5000 session 0x560f6f022380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f2d2a000/0x0/0x4ffc00000, data 0x6bdf6e0/0x6e20000, compress 0x0/0x0/0x0, omap 0x5ce6c, meta 0x6053194), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 40345600 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 419 ms_handle_reset con 0x560f6ebc2800 session 0x560f6bfe2000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 419 ms_handle_reset con 0x560f69ae4800 session 0x560f6ef336c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324164 data_alloc: 234881024 data_used: 23183768
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194387968 unmapped: 40337408 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 419 ms_handle_reset con 0x560f68ee5000 session 0x560f6fb22540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 420 ms_handle_reset con 0x560f6917dc00 session 0x560f6a0c9a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194396160 unmapped: 40329216 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 420 ms_handle_reset con 0x560f68ee4c00 session 0x560f6cfc7880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 420 heartbeat osd_stat(store_statfs(0x4f2d28000/0x0/0x4ffc00000, data 0x6be2dfc/0x6e24000, compress 0x0/0x0/0x0, omap 0x5d44b, meta 0x6052bb5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 420 ms_handle_reset con 0x560f6b4f5000 session 0x560f6ba5bc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194404352 unmapped: 40321024 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194404352 unmapped: 40321024 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194412544 unmapped: 40312832 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3325009 data_alloc: 234881024 data_used: 23184283
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194412544 unmapped: 40312832 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f2d24000/0x0/0x4ffc00000, data 0x6be4952/0x6e26000, compress 0x0/0x0/0x0, omap 0x5d5a9, meta 0x6052a57), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 421 ms_handle_reset con 0x560f68ee4c00 session 0x560f6bfe3c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194412544 unmapped: 40312832 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 421 ms_handle_reset con 0x560f68ee5000 session 0x560f6fafe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.814995766s of 10.003114700s, submitted: 81
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194453504 unmapped: 40271872 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 422 ms_handle_reset con 0x560f6b4f5000 session 0x560f6ad26000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194453504 unmapped: 40271872 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f2d1e000/0x0/0x4ffc00000, data 0x6be6433/0x6e2a000, compress 0x0/0x0/0x0, omap 0x5da68, meta 0x6052598), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 422 ms_handle_reset con 0x560f6b794800 session 0x560f6f023dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194453504 unmapped: 40271872 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334178 data_alloc: 234881024 data_used: 23281051
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 423 ms_handle_reset con 0x560f6da93800 session 0x560f6e5ef880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194469888 unmapped: 40255488 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194469888 unmapped: 40255488 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f2d1f000/0x0/0x4ffc00000, data 0x6be8023/0x6e2d000, compress 0x0/0x0/0x0, omap 0x5dbc7, meta 0x6052439), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194469888 unmapped: 40255488 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194469888 unmapped: 40255488 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 424 ms_handle_reset con 0x560f68ee4c00 session 0x560f68ede540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 424 ms_handle_reset con 0x560f68ee5000 session 0x560f6f023340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194469888 unmapped: 40255488 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3364280 data_alloc: 234881024 data_used: 26991515
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194469888 unmapped: 40255488 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f2d1a000/0x0/0x4ffc00000, data 0x6be9c4d/0x6e32000, compress 0x0/0x0/0x0, omap 0x5dd26, meta 0x60522da), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 424 ms_handle_reset con 0x560f6b4f5000 session 0x560f6952bc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194469888 unmapped: 40255488 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 424 ms_handle_reset con 0x560f6b794800 session 0x560f6eb3b500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194478080 unmapped: 40247296 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194478080 unmapped: 40247296 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 194478080 unmapped: 40247296 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f2d1c000/0x0/0x4ffc00000, data 0x6be9b89/0x6e30000, compress 0x0/0x0/0x0, omap 0x5dd26, meta 0x60522da), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 424 handle_osd_map epochs [425,425], i have 425, src has [1,425]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.262174606s of 12.359186172s, submitted: 39
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3367004 data_alloc: 251658240 data_used: 27449755
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201859072 unmapped: 32866304 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 425 heartbeat osd_stat(store_statfs(0x4f2a17000/0x0/0x4ffc00000, data 0x6beb608/0x6e33000, compress 0x0/0x0/0x0, omap 0x5e223, meta 0x6051ddd), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202203136 unmapped: 32522240 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202407936 unmapped: 32317440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202407936 unmapped: 32317440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202407936 unmapped: 32317440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437680 data_alloc: 251658240 data_used: 27212187
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 425 heartbeat osd_stat(store_statfs(0x4f2196000/0x0/0x4ffc00000, data 0x746c608/0x76b4000, compress 0x0/0x0/0x0, omap 0x5e223, meta 0x6051ddd), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202407936 unmapped: 32317440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202407936 unmapped: 32317440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 425 ms_handle_reset con 0x560f6b986c00 session 0x560f6fe0d880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199581696 unmapped: 35143680 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 426 ms_handle_reset con 0x560f68ee5000 session 0x560f6ba5ae00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 427 ms_handle_reset con 0x560f6b4f5000 session 0x560f69c028c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199581696 unmapped: 35143680 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 427 ms_handle_reset con 0x560f6b794800 session 0x560f6fafee00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 427 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fe0d6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f248d000/0x0/0x4ffc00000, data 0x746fda2/0x76bb000, compress 0x0/0x0/0x0, omap 0x5e573, meta 0x6051a8d), peers [0,1] op hist [0,0,0,1])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 427 ms_handle_reset con 0x560f6917dc00 session 0x560f6bfe36c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 427 ms_handle_reset con 0x560f69ae4800 session 0x560f69ee2700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199319552 unmapped: 35405824 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 427 ms_handle_reset con 0x560f68ee4c00 session 0x560f6adbfc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f2490000/0x0/0x4ffc00000, data 0x7470297/0x76bc000, compress 0x0/0x0/0x0, omap 0x5e573, meta 0x6051a8d), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429948 data_alloc: 251658240 data_used: 27212772
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.659602165s of 10.917957306s, submitted: 107
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199319552 unmapped: 35405824 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 428 ms_handle_reset con 0x560f68ee5000 session 0x560f69b9b880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 428 ms_handle_reset con 0x560f6b4f5000 session 0x560f69b9b340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199335936 unmapped: 35389440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 429 ms_handle_reset con 0x560f6b794800 session 0x560f6b896fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f248b000/0x0/0x4ffc00000, data 0x7471930/0x76bd000, compress 0x0/0x0/0x0, omap 0x5ea90, meta 0x6051570), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199335936 unmapped: 35389440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f248c000/0x0/0x4ffc00000, data 0x747353c/0x76c0000, compress 0x0/0x0/0x0, omap 0x5ebf1, meta 0x605140f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199335936 unmapped: 35389440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199335936 unmapped: 35389440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431374 data_alloc: 251658240 data_used: 27213998
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199335936 unmapped: 35389440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199335936 unmapped: 35389440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 429 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ad268c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f2486000/0x0/0x4ffc00000, data 0x7475039/0x76c4000, compress 0x0/0x0/0x0, omap 0x5f0f5, meta 0x6050f0b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199335936 unmapped: 35389440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199335936 unmapped: 35389440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 430 ms_handle_reset con 0x560f68ee5000 session 0x560f6b434c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f2486000/0x0/0x4ffc00000, data 0x7475039/0x76c4000, compress 0x0/0x0/0x0, omap 0x5f0f5, meta 0x6050f0b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 431 ms_handle_reset con 0x560f6b4f5000 session 0x560f69f24000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199393280 unmapped: 35332096 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3443067 data_alloc: 251658240 data_used: 27214368
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 432 ms_handle_reset con 0x560f6b223400 session 0x560f6fafee00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 432 ms_handle_reset con 0x560f69ae4800 session 0x560f6e5ef500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199393280 unmapped: 35332096 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 199393280 unmapped: 35332096 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 432 ms_handle_reset con 0x560f68ee4c00 session 0x560f6e5ee380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.640606880s of 11.747207642s, submitted: 51
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 34283520 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 433 ms_handle_reset con 0x560f68ee5000 session 0x560f6eb3b180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 433 ms_handle_reset con 0x560f6b4f5000 session 0x560f6ad27dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 434 ms_handle_reset con 0x560f69ae5c00 session 0x560f6bfe3dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 434 ms_handle_reset con 0x560f6917b400 session 0x560f6f023a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200458240 unmapped: 34267136 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 434 ms_handle_reset con 0x560f68ee4c00 session 0x560f6bfe2e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f2477000/0x0/0x4ffc00000, data 0x747be26/0x76d1000, compress 0x0/0x0/0x0, omap 0x5ffd5, meta 0x605002b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 435 ms_handle_reset con 0x560f6a20d000 session 0x560f6c7f1500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 435 ms_handle_reset con 0x560f69ae5c00 session 0x560f68ede540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 435 ms_handle_reset con 0x560f6b223400 session 0x560f6adbe380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200466432 unmapped: 34258944 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457640 data_alloc: 251658240 data_used: 27215225
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200491008 unmapped: 34234368 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 436 ms_handle_reset con 0x560f6b4f5000 session 0x560f69b9a8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 436 ms_handle_reset con 0x560f68ee4c00 session 0x560f6fb22a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200491008 unmapped: 34234368 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 436 ms_handle_reset con 0x560f69ae5c00 session 0x560f6ba1cc40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200491008 unmapped: 34234368 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 437 ms_handle_reset con 0x560f6a20d000 session 0x560f6b484380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 437 heartbeat osd_stat(store_statfs(0x4f2470000/0x0/0x4ffc00000, data 0x747f5b2/0x76d7000, compress 0x0/0x0/0x0, omap 0x60c9b, meta 0x604f365), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200499200 unmapped: 34226176 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 438 ms_handle_reset con 0x560f6b223400 session 0x560f6ba1d6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 438 ms_handle_reset con 0x560f6b4d0000 session 0x560f692b4000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200515584 unmapped: 34209792 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467245 data_alloc: 251658240 data_used: 27294174
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200515584 unmapped: 34209792 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 439 ms_handle_reset con 0x560f6b4d0000 session 0x560f6fe0d500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 439 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ad26000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200540160 unmapped: 34185216 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200548352 unmapped: 34177024 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f246a000/0x0/0x4ffc00000, data 0x7484abb/0x76e0000, compress 0x0/0x0/0x0, omap 0x6148f, meta 0x604eb71), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.282379150s of 10.439109802s, submitted: 85
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 440 ms_handle_reset con 0x560f69ae5c00 session 0x560f6fb22540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200556544 unmapped: 34168832 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200572928 unmapped: 34152448 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 441 ms_handle_reset con 0x560f6b223400 session 0x560f69b5a380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475715 data_alloc: 251658240 data_used: 27295156
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 442 ms_handle_reset con 0x560f6a20d000 session 0x560f6cfc76c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200581120 unmapped: 34144256 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 442 handle_osd_map epochs [442,443], i have 442, src has [1,443]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200654848 unmapped: 34070528 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f245d000/0x0/0x4ffc00000, data 0x748b988/0x76ed000, compress 0x0/0x0/0x0, omap 0x622ab, meta 0x604dd55), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200679424 unmapped: 34045952 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 444 ms_handle_reset con 0x560f6a20d000 session 0x560f6fe0ce00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f2458000/0x0/0x4ffc00000, data 0x748d43f/0x76f0000, compress 0x0/0x0/0x0, omap 0x624af, meta 0x604db51), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200679424 unmapped: 34045952 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200679424 unmapped: 34045952 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 444 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba1ddc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3490286 data_alloc: 251658240 data_used: 27611206
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200679424 unmapped: 34045952 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 445 ms_handle_reset con 0x560f6b223400 session 0x560f6cfc6c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 446 ms_handle_reset con 0x560f69ae5c00 session 0x560f6fdb7880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200704000 unmapped: 34021376 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f12b2000/0x0/0x4ffc00000, data 0x7490be7/0x76f6000, compress 0x0/0x0/0x0, omap 0x62b5c, meta 0x71ed4a4), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 446 handle_osd_map epochs [447,447], i have 447, src has [1,447]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 200712192 unmapped: 34013184 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.137764931s of 10.240889549s, submitted: 87
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201785344 unmapped: 32940032 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201785344 unmapped: 32940032 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3501046 data_alloc: 251658240 data_used: 27612376
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 448 ms_handle_reset con 0x560f6b772c00 session 0x560f6bfe2700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 448 ms_handle_reset con 0x560f6b4d0000 session 0x560f6f022700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201801728 unmapped: 32923648 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 448 ms_handle_reset con 0x560f69ae5c00 session 0x560f6be001c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 449 ms_handle_reset con 0x560f6a20d000 session 0x560f6fdb6c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 449 ms_handle_reset con 0x560f6b223400 session 0x560f69ee3dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 449 ms_handle_reset con 0x560f68ee4c00 session 0x560f6be01dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201809920 unmapped: 32915456 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 449 heartbeat osd_stat(store_statfs(0x4f12aa000/0x0/0x4ffc00000, data 0x7495e8c/0x7700000, compress 0x0/0x0/0x0, omap 0x63347, meta 0x71eccb9), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 449 handle_osd_map epochs [450,450], i have 450, src has [1,450]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 450 ms_handle_reset con 0x560f68ee4c00 session 0x560f69c028c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201809920 unmapped: 32915456 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201809920 unmapped: 32915456 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f12a5000/0x0/0x4ffc00000, data 0x7497a52/0x7702000, compress 0x0/0x0/0x0, omap 0x63513, meta 0x71ecaed), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 450 handle_osd_map epochs [450,451], i have 450, src has [1,451]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 451 ms_handle_reset con 0x560f69ae5c00 session 0x560f6c7f1dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201809920 unmapped: 32915456 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508184 data_alloc: 251658240 data_used: 27612376
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 451 heartbeat osd_stat(store_statfs(0x4f12a5000/0x0/0x4ffc00000, data 0x749965e/0x7705000, compress 0x0/0x0/0x0, omap 0x63a71, meta 0x71ec58f), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201809920 unmapped: 32915456 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 452 ms_handle_reset con 0x560f6a20d000 session 0x560f6b897dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 32890880 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 453 ms_handle_reset con 0x560f6b223400 session 0x560f69b5b180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202924032 unmapped: 31801344 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.826086998s of 10.177419662s, submitted: 170
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 453 handle_osd_map epochs [453,454], i have 454, src has [1,454]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202924032 unmapped: 31801344 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 454 handle_osd_map epochs [454,455], i have 454, src has [1,455]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 455 ms_handle_reset con 0x560f6b4d0000 session 0x560f6be01a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202981376 unmapped: 31744000 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 455 ms_handle_reset con 0x560f6b4d0000 session 0x560f6b896c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 455 heartbeat osd_stat(store_statfs(0x4f129b000/0x0/0x4ffc00000, data 0x74a0652/0x770f000, compress 0x0/0x0/0x0, omap 0x64995, meta 0x71eb66b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515976 data_alloc: 251658240 data_used: 27604086
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202981376 unmapped: 31744000 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202981376 unmapped: 31744000 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202981376 unmapped: 31744000 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202981376 unmapped: 31744000 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202981376 unmapped: 31744000 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 456 ms_handle_reset con 0x560f68ee4c00 session 0x560f6c7f0000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521616 data_alloc: 251658240 data_used: 27604699
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 456 heartbeat osd_stat(store_statfs(0x4f1299000/0x0/0x4ffc00000, data 0x74a2151/0x7713000, compress 0x0/0x0/0x0, omap 0x64f64, meta 0x71eb09c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202981376 unmapped: 31744000 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202981376 unmapped: 31744000 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 456 handle_osd_map epochs [456,457], i have 456, src has [1,457]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 457 ms_handle_reset con 0x560f6a20d000 session 0x560f6fe0ddc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 31727616 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 457 ms_handle_reset con 0x560f68ee5000 session 0x560f6b485340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 457 ms_handle_reset con 0x560f6917b400 session 0x560f6f023340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 457 handle_osd_map epochs [457,458], i have 458, src has [1,458]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.062201500s of 10.206005096s, submitted: 75
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 458 ms_handle_reset con 0x560f68ee4c00 session 0x560f6e5efdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 458 ms_handle_reset con 0x560f68ee5000 session 0x560f6b896e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 458 heartbeat osd_stat(store_statfs(0x4f1291000/0x0/0x4ffc00000, data 0x74a57a4/0x7719000, compress 0x0/0x0/0x0, omap 0x65380, meta 0x71eac80), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 31727616 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 459 ms_handle_reset con 0x560f6a20d000 session 0x560f6ba1c540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 459 ms_handle_reset con 0x560f69ae5c00 session 0x560f6ba4ddc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 31727616 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3538023 data_alloc: 251658240 data_used: 29161593
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 31727616 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 459 handle_osd_map epochs [459,460], i have 459, src has [1,460]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 460 ms_handle_reset con 0x560f6b4d0000 session 0x560f6ad26000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204046336 unmapped: 30679040 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204054528 unmapped: 30670848 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 460 ms_handle_reset con 0x560f68ee5000 session 0x560f685efc00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 460 ms_handle_reset con 0x560f69ae5c00 session 0x560f6a0c9500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 460 heartbeat osd_stat(store_statfs(0x4f00eb000/0x0/0x4ffc00000, data 0x74a8f30/0x771f000, compress 0x0/0x0/0x0, omap 0x65a4f, meta 0x838a5b1), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 460 handle_osd_map epochs [460,461], i have 460, src has [1,461]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 461 ms_handle_reset con 0x560f68ee4c00 session 0x560f6a0c8c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 461 ms_handle_reset con 0x560f6a20d000 session 0x560f6fdfa540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204226560 unmapped: 30498816 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204226560 unmapped: 30498816 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3298261 data_alloc: 234881024 data_used: 23192071
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204226560 unmapped: 30498816 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 461 heartbeat osd_stat(store_statfs(0x4f2969000/0x0/0x4ffc00000, data 0x4c29aca/0x4e9f000, compress 0x0/0x0/0x0, omap 0x66147, meta 0x8389eb9), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204226560 unmapped: 30498816 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204226560 unmapped: 30498816 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204226560 unmapped: 30498816 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204226560 unmapped: 30498816 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3300571 data_alloc: 234881024 data_used: 23192071
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204226560 unmapped: 30498816 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f2968000/0x0/0x4ffc00000, data 0x4c2b565/0x4ea2000, compress 0x0/0x0/0x0, omap 0x6630e, meta 0x8389cf2), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204226560 unmapped: 30498816 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.497895241s of 13.722840309s, submitted: 90
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 462 ms_handle_reset con 0x560f6b4d0000 session 0x560f69ee2c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204234752 unmapped: 30490624 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204234752 unmapped: 30490624 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 463 ms_handle_reset con 0x560f68ee5000 session 0x560f6ba5b880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 464 ms_handle_reset con 0x560f69ae5c00 session 0x560f6b896540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 30474240 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 464 handle_osd_map epochs [464,465], i have 464, src has [1,465]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 465 ms_handle_reset con 0x560f6a20d000 session 0x560f6e5ef880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 465 ms_handle_reset con 0x560f6b223400 session 0x560f69b9b340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 465 ms_handle_reset con 0x560f68ee4c00 session 0x560f6ba5ae00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3312841 data_alloc: 234881024 data_used: 23192087
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f2961000/0x0/0x4ffc00000, data 0x4c2eb90/0x4ea9000, compress 0x0/0x0/0x0, omap 0x66cff, meta 0x8389301), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204259328 unmapped: 30466048 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f2961000/0x0/0x4ffc00000, data 0x4c2eb90/0x4ea9000, compress 0x0/0x0/0x0, omap 0x66cff, meta 0x8389301), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204259328 unmapped: 30466048 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 465 handle_osd_map epochs [465,466], i have 465, src has [1,466]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 466 ms_handle_reset con 0x560f68ee5000 session 0x560f69b5a8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 466 ms_handle_reset con 0x560f69ae5c00 session 0x560f69ee2700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204316672 unmapped: 30408704 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 467 ms_handle_reset con 0x560f6a20d000 session 0x560f6e5ef500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204324864 unmapped: 30400512 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 467 ms_handle_reset con 0x560f6b223400 session 0x560f6eb3b180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204324864 unmapped: 30400512 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316879 data_alloc: 234881024 data_used: 23192071
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204324864 unmapped: 30400512 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 467 heartbeat osd_stat(store_statfs(0x4f2957000/0x0/0x4ffc00000, data 0x4c33f18/0x4eb1000, compress 0x0/0x0/0x0, omap 0x67531, meta 0x8388acf), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204324864 unmapped: 30400512 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204324864 unmapped: 30400512 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204324864 unmapped: 30400512 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204324864 unmapped: 30400512 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 467 heartbeat osd_stat(store_statfs(0x4f2957000/0x0/0x4ffc00000, data 0x4c33f18/0x4eb1000, compress 0x0/0x0/0x0, omap 0x67531, meta 0x8388acf), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316879 data_alloc: 234881024 data_used: 23192071
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204324864 unmapped: 30400512 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204324864 unmapped: 30400512 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.288922310s of 15.452919006s, submitted: 38
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204324864 unmapped: 30400512 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204324864 unmapped: 30400512 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204324864 unmapped: 30400512 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3319189 data_alloc: 234881024 data_used: 23192071
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f2956000/0x0/0x4ffc00000, data 0x4c359cf/0x4eb4000, compress 0x0/0x0/0x0, omap 0x6776e, meta 0x8388892), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 468 ms_handle_reset con 0x560f6b4b9400 session 0x560f6fe0c380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204341248 unmapped: 30384128 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204341248 unmapped: 30384128 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 468 handle_osd_map epochs [468,469], i have 469, src has [1,469]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 469 ms_handle_reset con 0x560f68ee5000 session 0x560f6c7f1c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204365824 unmapped: 30359552 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 469 ms_handle_reset con 0x560f69ae5c00 session 0x560f6fdfa1c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 469 handle_osd_map epochs [469,470], i have 469, src has [1,470]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 470 ms_handle_reset con 0x560f6a20d000 session 0x560f6b485dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204374016 unmapped: 30351360 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204374016 unmapped: 30351360 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 470 ms_handle_reset con 0x560f6b223400 session 0x560f68ede8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 470 ms_handle_reset con 0x560f6b4b8000 session 0x560f6e5ee380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326931 data_alloc: 234881024 data_used: 23192071
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204382208 unmapped: 30343168 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 470 heartbeat osd_stat(store_statfs(0x4f2950000/0x0/0x4ffc00000, data 0x4c391cd/0x4ebc000, compress 0x0/0x0/0x0, omap 0x68385, meta 0x8387c7b), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 470 ms_handle_reset con 0x560f68ee5000 session 0x560f6fb22540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204382208 unmapped: 30343168 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204382208 unmapped: 30343168 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 470 handle_osd_map epochs [470,471], i have 471, src has [1,471]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.997470856s of 11.160695076s, submitted: 86
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 471 ms_handle_reset con 0x560f69ae5c00 session 0x560f69b9a540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204382208 unmapped: 30343168 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 471 heartbeat osd_stat(store_statfs(0x4f2951000/0x0/0x4ffc00000, data 0x4c3916b/0x4ebb000, compress 0x0/0x0/0x0, omap 0x6835a, meta 0x8387ca6), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 471 ms_handle_reset con 0x560f6a20d000 session 0x560f6ba4d6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 471 ms_handle_reset con 0x560f6b223400 session 0x560f6cfc7340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204398592 unmapped: 30326784 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 472 ms_handle_reset con 0x560f689ff400 session 0x560f6c7f0e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 472 ms_handle_reset con 0x560f68ee5000 session 0x560f6b8961c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334651 data_alloc: 234881024 data_used: 23192071
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 30318592 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 472 ms_handle_reset con 0x560f69ae5c00 session 0x560f6ba5b500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 30318592 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 30310400 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 473 handle_osd_map epochs [473,474], i have 473, src has [1,474]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 474 ms_handle_reset con 0x560f6a20d000 session 0x560f6c7f0700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204423168 unmapped: 30302208 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 474 ms_handle_reset con 0x560f6b223400 session 0x560f6cfc7dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 474 heartbeat osd_stat(store_statfs(0x4f2942000/0x0/0x4ffc00000, data 0x4c3ffc8/0x4ec8000, compress 0x0/0x0/0x0, omap 0x68ccf, meta 0x8387331), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 474 handle_osd_map epochs [474,475], i have 474, src has [1,475]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 475 ms_handle_reset con 0x560f6b207800 session 0x560f6b4856c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 475 heartbeat osd_stat(store_statfs(0x4f2942000/0x0/0x4ffc00000, data 0x4c3ffc8/0x4ec8000, compress 0x0/0x0/0x0, omap 0x68ccf, meta 0x8387331), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204447744 unmapped: 30277632 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 475 ms_handle_reset con 0x560f68ee5000 session 0x560f6cfc76c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345963 data_alloc: 234881024 data_used: 23192972
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 475 ms_handle_reset con 0x560f69ae5c00 session 0x560f6b896fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204447744 unmapped: 30277632 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 475 ms_handle_reset con 0x560f6a20d000 session 0x560f6e5eee00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204447744 unmapped: 30277632 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 475 handle_osd_map epochs [475,476], i have 475, src has [1,476]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 476 heartbeat osd_stat(store_statfs(0x4f293d000/0x0/0x4ffc00000, data 0x4c41b56/0x4eca000, compress 0x0/0x0/0x0, omap 0x68e26, meta 0x83871da), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204455936 unmapped: 30269440 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.020059586s of 10.088260651s, submitted: 51
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 477 ms_handle_reset con 0x560f6b207800 session 0x560f6b897880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204464128 unmapped: 30261248 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 477 ms_handle_reset con 0x560f6b223400 session 0x560f6e5ef180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 477 handle_osd_map epochs [477,478], i have 477, src has [1,478]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 478 ms_handle_reset con 0x560f68ee5000 session 0x560f6ef33180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204472320 unmapped: 30253056 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3353683 data_alloc: 234881024 data_used: 23192956
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 478 ms_handle_reset con 0x560f69ae5c00 session 0x560f6fb23880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 478 ms_handle_reset con 0x560f6a20d000 session 0x560f69b9aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204472320 unmapped: 30253056 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 478 heartbeat osd_stat(store_statfs(0x4f2938000/0x0/0x4ffc00000, data 0x4c46d89/0x4ed2000, compress 0x0/0x0/0x0, omap 0x6964c, meta 0x83869b4), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204472320 unmapped: 30253056 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 478 ms_handle_reset con 0x560f6b207800 session 0x560f6eb3bdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 478 heartbeat osd_stat(store_statfs(0x4f2938000/0x0/0x4ffc00000, data 0x4c46d89/0x4ed2000, compress 0x0/0x0/0x0, omap 0x6964c, meta 0x83869b4), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204521472 unmapped: 30203904 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204546048 unmapped: 30179328 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 480 ms_handle_reset con 0x560f6b4b8800 session 0x560f6fafe700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 480 ms_handle_reset con 0x560f68ee5000 session 0x560f6e5ef500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204570624 unmapped: 30154752 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 481 ms_handle_reset con 0x560f69ae5c00 session 0x560f6be00e00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3364647 data_alloc: 234881024 data_used: 23193569
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 481 ms_handle_reset con 0x560f6a20d000 session 0x560f69ee2700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204619776 unmapped: 30105600 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 481 ms_handle_reset con 0x560f6b207800 session 0x560f6cfc7340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 481 heartbeat osd_stat(store_statfs(0x4f292e000/0x0/0x4ffc00000, data 0x4c4c02e/0x4edc000, compress 0x0/0x0/0x0, omap 0x6a725, meta 0x83858db), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204619776 unmapped: 30105600 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 481 handle_osd_map epochs [481,482], i have 481, src has [1,482]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 482 ms_handle_reset con 0x560f69f93000 session 0x560f6f0236c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204619776 unmapped: 30105600 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f292f000/0x0/0x4ffc00000, data 0x4c4bfcc/0x4edb000, compress 0x0/0x0/0x0, omap 0x6a725, meta 0x83858db), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204619776 unmapped: 30105600 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f2929000/0x0/0x4ffc00000, data 0x4c4da77/0x4edf000, compress 0x0/0x0/0x0, omap 0x6a8ea, meta 0x8385716), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.477232933s of 10.621196747s, submitted: 91
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 483 ms_handle_reset con 0x560f68ee5000 session 0x560f6952a380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 483 ms_handle_reset con 0x560f69ae5c00 session 0x560f69b9b340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204636160 unmapped: 30089216 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 483 ms_handle_reset con 0x560f69f93000 session 0x560f69b9aa80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3373833 data_alloc: 234881024 data_used: 23193569
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 484 ms_handle_reset con 0x560f6a20d000 session 0x560f6eb3bdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204636160 unmapped: 30089216 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 484 ms_handle_reset con 0x560f6b207800 session 0x560f6b484380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f2923000/0x0/0x4ffc00000, data 0x4c5121f/0x4ee5000, compress 0x0/0x0/0x0, omap 0x6af5c, meta 0x83850a4), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204636160 unmapped: 30089216 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 484 handle_osd_map epochs [484,485], i have 484, src has [1,485]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 30081024 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204644352 unmapped: 30081024 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 486 ms_handle_reset con 0x560f68ee5000 session 0x560f6bfe2000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 486 ms_handle_reset con 0x560f69ae5c00 session 0x560f6bfe3dc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 30072832 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 486 handle_osd_map epochs [486,487], i have 486, src has [1,487]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 487 ms_handle_reset con 0x560f69f93000 session 0x560f6adbe380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3382661 data_alloc: 234881024 data_used: 23194182
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204685312 unmapped: 30040064 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 487 heartbeat osd_stat(store_statfs(0x4f291a000/0x0/0x4ffc00000, data 0x4c56452/0x4eed000, compress 0x0/0x0/0x0, omap 0x6b799, meta 0x8384867), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204685312 unmapped: 30040064 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 488 ms_handle_reset con 0x560f6a20d000 session 0x560f6fdfa8c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204701696 unmapped: 30023680 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204701696 unmapped: 30023680 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.966103554s of 10.109176636s, submitted: 68
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 489 ms_handle_reset con 0x560f6b207800 session 0x560f6fb22fc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 30007296 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 489 ms_handle_reset con 0x560f68ee5000 session 0x560f6f023c00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 490 ms_handle_reset con 0x560f69ae5c00 session 0x560f68ede540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392316 data_alloc: 234881024 data_used: 23195380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f2914000/0x0/0x4ffc00000, data 0x4c59b07/0x4ef4000, compress 0x0/0x0/0x0, omap 0x6bd6b, meta 0x8384295), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 30007296 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 490 ms_handle_reset con 0x560f69f93000 session 0x560f6ba5ae00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 490 ms_handle_reset con 0x560f6a20d000 session 0x560f6f00ee00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204726272 unmapped: 29999104 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204726272 unmapped: 29999104 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 491 ms_handle_reset con 0x560f6b774400 session 0x560f69b9bdc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204570624 unmapped: 30154752 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204570624 unmapped: 30154752 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 491 handle_osd_map epochs [491,492], i have 491, src has [1,492]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397988 data_alloc: 234881024 data_used: 23196578
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204587008 unmapped: 30138368 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f290c000/0x0/0x4ffc00000, data 0x4c5ece8/0x4efc000, compress 0x0/0x0/0x0, omap 0x6c98a, meta 0x8383676), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 492 ms_handle_reset con 0x560f68ee5000 session 0x560f6eb3b180
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204587008 unmapped: 30138368 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f290c000/0x0/0x4ffc00000, data 0x4c5ece8/0x4efc000, compress 0x0/0x0/0x0, omap 0x6c98a, meta 0x8383676), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 492 handle_osd_map epochs [492,493], i have 492, src has [1,493]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204595200 unmapped: 30130176 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204595200 unmapped: 30130176 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204595200 unmapped: 30130176 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3400298 data_alloc: 234881024 data_used: 23196578
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204595200 unmapped: 30130176 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f290b000/0x0/0x4ffc00000, data 0x4c60910/0x4eff000, compress 0x0/0x0/0x0, omap 0x6cf2b, meta 0x83830d5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204595200 unmapped: 30130176 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204603392 unmapped: 30121984 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204603392 unmapped: 30121984 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f290b000/0x0/0x4ffc00000, data 0x4c60910/0x4eff000, compress 0x0/0x0/0x0, omap 0x6cf2b, meta 0x83830d5), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204603392 unmapped: 30121984 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3400298 data_alloc: 234881024 data_used: 23196578
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204603392 unmapped: 30121984 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204603392 unmapped: 30121984 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 493 handle_osd_map epochs [494,494], i have 493, src has [1,494]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.758632660s of 17.859876633s, submitted: 111
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204611584 unmapped: 30113792 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204611584 unmapped: 30113792 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f2908000/0x0/0x4ffc00000, data 0x4c6238f/0x4f02000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204611584 unmapped: 30113792 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f2908000/0x0/0x4ffc00000, data 0x4c6238f/0x4f02000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403072 data_alloc: 234881024 data_used: 23196578
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 23K writes, 93K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 23K writes, 8277 syncs, 2.79 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8596 writes, 36K keys, 8596 commit groups, 1.0 writes per commit group, ingest: 28.03 MB, 0.05 MB/s#012Interval WAL: 8596 writes, 3597 syncs, 2.39 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204611584 unmapped: 30113792 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204611584 unmapped: 30113792 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204611584 unmapped: 30113792 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204611584 unmapped: 30113792 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f2908000/0x0/0x4ffc00000, data 0x4c6238f/0x4f02000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204611584 unmapped: 30113792 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403072 data_alloc: 234881024 data_used: 23196578
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f2908000/0x0/0x4ffc00000, data 0x4c6238f/0x4f02000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204611584 unmapped: 30113792 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204611584 unmapped: 30113792 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204611584 unmapped: 30113792 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204619776 unmapped: 30105600 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204627968 unmapped: 30097408 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403072 data_alloc: 234881024 data_used: 23196578
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204627968 unmapped: 30097408 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f2908000/0x0/0x4ffc00000, data 0x4c6238f/0x4f02000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204627968 unmapped: 30097408 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204627968 unmapped: 30097408 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204627968 unmapped: 30097408 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204627968 unmapped: 30097408 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403072 data_alloc: 234881024 data_used: 23196578
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204627968 unmapped: 30097408 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204627968 unmapped: 30097408 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f2908000/0x0/0x4ffc00000, data 0x4c6238f/0x4f02000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204627968 unmapped: 30097408 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204627968 unmapped: 30097408 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f2908000/0x0/0x4ffc00000, data 0x4c6238f/0x4f02000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204627968 unmapped: 30097408 heap: 234725376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.535429001s of 23.544799805s, submitted: 11
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476799 data_alloc: 234881024 data_used: 23196578
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 ms_handle_reset con 0x560f69ae5c00 session 0x560f6fb22a80
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 ms_handle_reset con 0x560f69f93000 session 0x560f6fafe540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 ms_handle_reset con 0x560f6a20d000 session 0x560f692b4000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 ms_handle_reset con 0x560f6b4d3000 session 0x560f685ee540
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 ms_handle_reset con 0x560f68ee5000 session 0x560f6ba1cfc0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 38404096 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 38404096 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 38404096 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 38404096 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f1ed6000/0x0/0x4ffc00000, data 0x56953f1/0x5936000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 38404096 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 ms_handle_reset con 0x560f69ae5c00 session 0x560f6f023a40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468517 data_alloc: 234881024 data_used: 23196578
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 38404096 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 38404096 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 210804736 unmapped: 32317440 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 210804736 unmapped: 32317440 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 210804736 unmapped: 32317440 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3531237 data_alloc: 251658240 data_used: 33783714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f1ed6000/0x0/0x4ffc00000, data 0x56953f1/0x5936000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 210804736 unmapped: 32317440 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 210804736 unmapped: 32317440 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 210804736 unmapped: 32317440 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f1ed6000/0x0/0x4ffc00000, data 0x56953f1/0x5936000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 210804736 unmapped: 32317440 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 210804736 unmapped: 32317440 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3531237 data_alloc: 251658240 data_used: 33783714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 210804736 unmapped: 32317440 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 210804736 unmapped: 32317440 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.505640030s of 16.604169846s, submitted: 29
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f1e84000/0x0/0x4ffc00000, data 0x56e73f1/0x5988000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [0,0,0,2,2])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 215023616 unmapped: 28098560 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 215203840 unmapped: 27918336 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 215203840 unmapped: 27918336 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f13ab000/0x0/0x4ffc00000, data 0x61b83f1/0x6459000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3604499 data_alloc: 251658240 data_used: 34182050
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 215203840 unmapped: 27918336 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 215203840 unmapped: 27918336 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 215203840 unmapped: 27918336 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214532096 unmapped: 28590080 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f13b0000/0x0/0x4ffc00000, data 0x61bb3f1/0x645c000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214532096 unmapped: 28590080 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599963 data_alloc: 251658240 data_used: 34186146
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214532096 unmapped: 28590080 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214532096 unmapped: 28590080 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f13b0000/0x0/0x4ffc00000, data 0x61bb3f1/0x645c000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214532096 unmapped: 28590080 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214532096 unmapped: 28590080 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214532096 unmapped: 28590080 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599963 data_alloc: 251658240 data_used: 34186146
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214532096 unmapped: 28590080 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f13b0000/0x0/0x4ffc00000, data 0x61bb3f1/0x645c000, compress 0x0/0x0/0x0, omap 0x6d0f4, meta 0x8382f0c), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.581731796s of 14.748417854s, submitted: 102
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214532096 unmapped: 28590080 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 494 handle_osd_map epochs [494,495], i have 495, src has [1,495]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214532096 unmapped: 28590080 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214597632 unmapped: 28524544 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 495 handle_osd_map epochs [496,496], i have 495, src has [1,496]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214720512 unmapped: 28401664 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f1310000/0x0/0x4ffc00000, data 0x62dab29/0x64f6000, compress 0x0/0x0/0x0, omap 0x6db67, meta 0x8382499), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3628723 data_alloc: 251658240 data_used: 34198434
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214720512 unmapped: 28401664 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214745088 unmapped: 28377088 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 496 handle_osd_map epochs [497,497], i have 496, src has [1,497]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214745088 unmapped: 28377088 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 ms_handle_reset con 0x560f6917ac00 session 0x560f6cfc7880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f130a000/0x0/0x4ffc00000, data 0x636b6c5/0x64ff000, compress 0x0/0x0/0x0, omap 0x6e0b2, meta 0x8381f4e), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214745088 unmapped: 28377088 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 ms_handle_reset con 0x560f6da92000 session 0x560f6ef336c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214753280 unmapped: 28368896 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3640657 data_alloc: 251658240 data_used: 34198434
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214753280 unmapped: 28368896 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214753280 unmapped: 28368896 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.354393959s of 10.658625603s, submitted: 128
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214761472 unmapped: 28360704 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f130a000/0x0/0x4ffc00000, data 0x636d6d5/0x6502000, compress 0x0/0x0/0x0, omap 0x6e436, meta 0x8381bca), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 215326720 unmapped: 27795456 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214769664 unmapped: 28352512 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 ms_handle_reset con 0x560f6b4f4000 session 0x560f6b897340
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3649590 data_alloc: 251658240 data_used: 34759586
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214769664 unmapped: 28352512 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214769664 unmapped: 28352512 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f1281000/0x0/0x4ffc00000, data 0x63f56d5/0x658a000, compress 0x0/0x0/0x0, omap 0x6e436, meta 0x8381bca), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214769664 unmapped: 28352512 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214769664 unmapped: 28352512 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214769664 unmapped: 28352512 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3650334 data_alloc: 251658240 data_used: 34771874
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214769664 unmapped: 28352512 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 215343104 unmapped: 27779072 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 ms_handle_reset con 0x560f68ee5000 session 0x560f685ef6c0
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 215343104 unmapped: 27779072 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.716408730s of 10.775461197s, submitted: 20
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f11f9000/0x0/0x4ffc00000, data 0x647d6d5/0x6612000, compress 0x0/0x0/0x0, omap 0x6e436, meta 0x8381bca), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 ms_handle_reset con 0x560f6917ac00 session 0x560f69372c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214777856 unmapped: 28344320 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214777856 unmapped: 28344320 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3659918 data_alloc: 251658240 data_used: 35320738
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214777856 unmapped: 28344320 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f11f8000/0x0/0x4ffc00000, data 0x647e737/0x6614000, compress 0x0/0x0/0x0, omap 0x6e436, meta 0x8381bca), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214777856 unmapped: 28344320 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214777856 unmapped: 28344320 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214786048 unmapped: 28336128 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f11ec000/0x0/0x4ffc00000, data 0x648a737/0x6620000, compress 0x0/0x0/0x0, omap 0x6e436, meta 0x8381bca), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 ms_handle_reset con 0x560f69ae5c00 session 0x560f6fdb6c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214794240 unmapped: 28327936 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3664677 data_alloc: 251658240 data_used: 35877794
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214794240 unmapped: 28327936 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 ms_handle_reset con 0x560f6da92000 session 0x560f6a0c8c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214794240 unmapped: 28327936 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f11ea000/0x0/0x4ffc00000, data 0x64806d5/0x6615000, compress 0x0/0x0/0x0, omap 0x6e436, meta 0x8381bca), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 ms_handle_reset con 0x560f6df2c000 session 0x560f6ba1ce00
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214794240 unmapped: 28327936 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.898348808s of 10.970306396s, submitted: 28
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 ms_handle_reset con 0x560f6df2c000 session 0x560f6b434c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214802432 unmapped: 28319744 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 ms_handle_reset con 0x560f68ee5000 session 0x560f6fdb7880
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214810624 unmapped: 28311552 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 ms_handle_reset con 0x560f6917ac00 session 0x560f69b5a700
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f1307000/0x0/0x4ffc00000, data 0x63716c5/0x6505000, compress 0x0/0x0/0x0, omap 0x6e48b, meta 0x8381b75), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 handle_osd_map epochs [498,498], i have 497, src has [1,498]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 497 handle_osd_map epochs [497,498], i have 498, src has [1,498]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3653235 data_alloc: 251658240 data_used: 35873600
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214810624 unmapped: 28311552 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 498 ms_handle_reset con 0x560f69ae5c00 session 0x560f6f023500
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 498 handle_osd_map epochs [498,499], i have 498, src has [1,499]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214786048 unmapped: 28336128 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 499 ms_handle_reset con 0x560f6da92000 session 0x560f6fdb6380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 214786048 unmapped: 28336128 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 499 handle_osd_map epochs [500,500], i have 499, src has [1,500]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 27230208 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 27230208 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 500 ms_handle_reset con 0x560f6da92000 session 0x560f6c7f0000
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3628145 data_alloc: 251658240 data_used: 34190757
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 500 heartbeat osd_stat(store_statfs(0x4f1306000/0x0/0x4ffc00000, data 0x625ba95/0x6504000, compress 0x0/0x0/0x0, omap 0x6eebd, meta 0x8381143), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 27230208 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 500 ms_handle_reset con 0x560f69f93000 session 0x560f6e5ee380
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 500 ms_handle_reset con 0x560f6a20d000 session 0x560f6ba1cc40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 500 handle_osd_map epochs [501,501], i have 500, src has [1,501]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 501 ms_handle_reset con 0x560f68ee5000 session 0x560f6bfe2c40
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 35880960 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 501 heartbeat osd_stat(store_statfs(0x4f26c0000/0x0/0x4ffc00000, data 0x4c6e4ce/0x4f17000, compress 0x0/0x0/0x0, omap 0x6f04e, meta 0x8380fb2), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 35880960 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 35880960 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 35880960 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445385 data_alloc: 234881024 data_used: 23201189
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 35880960 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 35880960 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 501 handle_osd_map epochs [502,502], i have 501, src has [1,502]
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.300390244s of 13.494618416s, submitted: 84
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 35872768 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: osd.2 502 heartbeat osd_stat(store_statfs(0x4f28f0000/0x0/0x4ffc00000, data 0x4c6ff4d/0x4f1a000, compress 0x0/0x0/0x0, omap 0x6f219, meta 0x8380de7), peers [0,1] op hist [])
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448095 data_alloc: 234881024 data_used: 23205187
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 207405056 unmapped: 35717120 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: do_command 'config diff' '{prefix=config diff}'
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: do_command 'config show' '{prefix=config show}'
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: do_command 'counter dump' '{prefix=counter dump}'
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: do_command 'counter schema' '{prefix=counter schema}'
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 207519744 unmapped: 35602432 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: prioritycache tune_memory target: 4294967296 mapped: 207544320 unmapped: 35577856 heap: 243122176 old mem: 2845415832 new mem: 2845415832
Feb  2 07:14:57 np0005604943 ceph-osd[88236]: do_command 'log dump' '{prefix=log dump}'
Feb  2 07:14:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb  2 07:14:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1637848304' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb  2 07:14:57 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19150 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:57 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.ctqttb", "name": "rgw_frontends"} v 0)
Feb  2 07:14:57 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.ctqttb", "name": "rgw_frontends"} : dispatch
Feb  2 07:14:57 np0005604943 nova_compute[238883]: 2026-02-02 12:14:57.908 238887 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 07:14:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 07:14:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb  2 07:14:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3416094954' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb  2 07:14:58 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19154 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.ctqttb", "name": "rgw_frontends"} v 0)
Feb  2 07:14:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3350765502' entity='mgr.compute-0.twcemg' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.ctqttb", "name": "rgw_frontends"} : dispatch
Feb  2 07:14:58 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19158 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:58 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:14:58 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb  2 07:14:58 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2507840961' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb  2 07:14:59 np0005604943 podman[276135]: 2026-02-02 12:14:59.034864224 +0000 UTC m=+0.048056488 container health_status f7a8a7d56deab4622312f47586dba2f4884d78e46b23bfb226b684327aab18c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb  2 07:14:59 np0005604943 podman[276132]: 2026-02-02 12:14:59.101117309 +0000 UTC m=+0.112413131 container health_status dd42911bf41e885c3ba4077012f09a191888946e3867784418685a91ce34a059 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f753062280449c359ff4c2dce751de4cb0e8717503110c3ea49626eae4ec2b5b-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2-af67468a64521187a3d19b349040e64b8ebe04ca093912f980732f2b3e0883e2'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.build-date=20260127, tcib_managed=true)
Feb  2 07:14:59 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19160 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb  2 07:14:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2068899128' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb  2 07:14:59 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19164 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 07:14:59 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 07:14:59 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/973839920' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb  2 07:15:00 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19168 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 07:15:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb  2 07:15:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3822542353' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb  2 07:15:00 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19172 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 07:15:00 np0005604943 ceph-mgr[75558]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Feb  2 07:15:00 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb  2 07:15:00 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/61825183' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb  2 07:15:01 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19176 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 07:15:01 np0005604943 ceph-mgr[75558]: log_channel(audit) log [DBG] : from='client.19180 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 07:15:01 np0005604943 ceph-mgr[75558]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  2 07:15:01 np0005604943 ceph-4548a36b-7cdc-5e3e-a814-4e1571be1fae-mgr-compute-0-twcemg[75554]: 2026-02-02T12:15:01.546+0000 7f564e481640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 143 ms_handle_reset con 0x558f10028400 session 0x558f130aa700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 143 ms_handle_reset con 0x558f134cf800 session 0x558f12d9d500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 143 ms_handle_reset con 0x558f0fa41000 session 0x558f0fa45180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 143 ms_handle_reset con 0x558f0fa41400 session 0x558f12b94fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 14401536 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 143 ms_handle_reset con 0x558f10028400 session 0x558f113e1dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.173399925s of 10.008166313s, submitted: 211
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 144 ms_handle_reset con 0x558f13d28400 session 0x558f12f83a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 14368768 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 145 ms_handle_reset con 0x558f134ce000 session 0x558f12d46000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 145 ms_handle_reset con 0x558f0fa41000 session 0x558f12d9ca80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157198 data_alloc: 218103808 data_used: 13374
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 145 ms_handle_reset con 0x558f0fa41400 session 0x558f12773c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 145 ms_handle_reset con 0x558f10028400 session 0x558f0fb0ce00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 14295040 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 146 ms_handle_reset con 0x558f13d28400 session 0x558f127b5500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc185000/0x0/0x4ffc00000, data 0xdc3306/0xea3000, compress 0x0/0x0/0x0, omap 0x19a35, meta 0x2bb65cb), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc180000/0x0/0x4ffc00000, data 0xdc4f06/0xea6000, compress 0x0/0x0/0x0, omap 0x1a15e, meta 0x2bb5ea2), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 14286848 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 146 ms_handle_reset con 0x558f12194400 session 0x558f12d9c380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 14434304 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 146 ms_handle_reset con 0x558f0fa41000 session 0x558f12ad1c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fc180000/0x0/0x4ffc00000, data 0xdc4f06/0xea6000, compress 0x0/0x0/0x0, omap 0x1a15e, meta 0x2bb5ea2), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 14434304 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89251840 unmapped: 14434304 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156976 data_alloc: 218103808 data_used: 13374
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 14385152 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 14385152 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fc187000/0x0/0x4ffc00000, data 0xdc4ef6/0xea5000, compress 0x0/0x0/0x0, omap 0x1a3a5, meta 0x2bb5c5b), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 14385152 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 14385152 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 14385152 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160470 data_alloc: 218103808 data_used: 13374
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 14385152 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fc182000/0x0/0x4ffc00000, data 0xdc69a1/0xea8000, compress 0x0/0x0/0x0, omap 0x1a673, meta 0x2bb598d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 147 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.362982750s of 11.869185448s, submitted: 99
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 14385152 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 148 ms_handle_reset con 0x558f0fa41400 session 0x558f12b94700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 14385152 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 14385152 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 14385152 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f10028400 session 0x558f0fa45c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f13d28400 session 0x558f12b94000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166018 data_alloc: 218103808 data_used: 13374
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17f000/0x0/0x4ffc00000, data 0xdc843c/0xeab000, compress 0x0/0x0/0x0, omap 0x1aa81, meta 0x2bb557f), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 14385152 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17c000/0x0/0x4ffc00000, data 0xdc9ebb/0xeae000, compress 0x0/0x0/0x0, omap 0x1ad3c, meta 0x2bb52c4), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17c000/0x0/0x4ffc00000, data 0xdc9ebb/0xeae000, compress 0x0/0x0/0x0, omap 0x1ad3c, meta 0x2bb52c4), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166018 data_alloc: 218103808 data_used: 13374
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.152154922s of 11.176357269s, submitted: 26
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f12194800 session 0x558f1242f500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f0fa41000 session 0x558f12651c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17c000/0x0/0x4ffc00000, data 0xdc9ebb/0xeae000, compress 0x0/0x0/0x0, omap 0x1addc, meta 0x2bb5224), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f0fa41400 session 0x558f12f836c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f10028400 session 0x558f127b56c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f13d28400 session 0x558f12f821c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f0fdc7400 session 0x558f127b5a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17e000/0x0/0x4ffc00000, data 0xdc9ebb/0xeae000, compress 0x0/0x0/0x0, omap 0x1af1c, meta 0x2bb50e4), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165298 data_alloc: 218103808 data_used: 13374
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f0fa41000 session 0x558f12ad0540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17e000/0x0/0x4ffc00000, data 0xdc9ebb/0xeae000, compress 0x0/0x0/0x0, omap 0x1b05c, meta 0x2bb4fa4), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f0fa41400 session 0x558f130cfdc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f10028400 session 0x558f130abdc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165298 data_alloc: 218103808 data_used: 13374
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f13d28400 session 0x558f12cd1a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17d000/0x0/0x4ffc00000, data 0xdc9f1d/0xeaf000, compress 0x0/0x0/0x0, omap 0x1b31d, meta 0x2bb4ce3), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.459716797s of 10.475934029s, submitted: 9
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f0fdc7800 session 0x558f12d46540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f0fa41000 session 0x558f12d461c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17e000/0x0/0x4ffc00000, data 0xdc9ebb/0xeae000, compress 0x0/0x0/0x0, omap 0x1b51a, meta 0x2bb4ae6), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167224 data_alloc: 218103808 data_used: 13374
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17e000/0x0/0x4ffc00000, data 0xdc9ebb/0xeae000, compress 0x0/0x0/0x0, omap 0x1b51a, meta 0x2bb4ae6), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 14376960 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 ms_handle_reset con 0x558f0fa41400 session 0x558f13098540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 14336000 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 14336000 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc17d000/0x0/0x4ffc00000, data 0xdc9f1d/0xeaf000, compress 0x0/0x0/0x0, omap 0x1b5ba, meta 0x2bb4a46), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 14336000 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 150 ms_handle_reset con 0x558f10028400 session 0x558f121e5dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174163 data_alloc: 218103808 data_used: 13472
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 14319616 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 150 ms_handle_reset con 0x558f13d28400 session 0x558f113e01c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 14319616 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 151 ms_handle_reset con 0x558f12194000 session 0x558f12773880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 151 ms_handle_reset con 0x558f0fa41000 session 0x558f121e5c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.024594307s of 10.131167412s, submitted: 49
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 151 ms_handle_reset con 0x558f0fa41400 session 0x558f126d1340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 14295040 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 14295040 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fc176000/0x0/0x4ffc00000, data 0xdcd647/0xeb4000, compress 0x0/0x0/0x0, omap 0x1bbf6, meta 0x2bb440a), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 14295040 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fc176000/0x0/0x4ffc00000, data 0xdcd647/0xeb4000, compress 0x0/0x0/0x0, omap 0x1bbf6, meta 0x2bb440a), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175439 data_alloc: 218103808 data_used: 13374
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 14295040 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 14295040 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 14295040 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 14295040 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fc176000/0x0/0x4ffc00000, data 0xdcd647/0xeb4000, compress 0x0/0x0/0x0, omap 0x1bbf6, meta 0x2bb440a), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 14295040 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 152 ms_handle_reset con 0x558f10028400 session 0x558f126d0a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179245 data_alloc: 218103808 data_used: 13374
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 152 ms_handle_reset con 0x558f13d28400 session 0x558f0fdaafc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 14286848 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 14286848 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 152 ms_handle_reset con 0x558f12194800 session 0x558f10b3ca80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 152 ms_handle_reset con 0x558f0fa41000 session 0x558f13099180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 14286848 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fc174000/0x0/0x4ffc00000, data 0xdcf0d6/0xeb8000, compress 0x0/0x0/0x0, omap 0x1c0a5, meta 0x2bb3f5b), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 14286848 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 14286848 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179245 data_alloc: 218103808 data_used: 13374
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89399296 unmapped: 14286848 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.963131905s of 13.996237755s, submitted: 24
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 154 ms_handle_reset con 0x558f10028400 session 0x558f121e5880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 14295040 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 14147584 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 155 ms_handle_reset con 0x558f134cf800 session 0x558f127b4540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fc164000/0x0/0x4ffc00000, data 0xdd442a/0xec2000, compress 0x0/0x0/0x0, omap 0x1c87c, meta 0x2bb3784), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 155 ms_handle_reset con 0x558f13d28400 session 0x558f120261c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 156 ms_handle_reset con 0x558f0fa41400 session 0x558f128b9dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 14098432 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 14098432 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194277 data_alloc: 218103808 data_used: 13390
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 156 ms_handle_reset con 0x558f0fa41000 session 0x558f121e56c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 14098432 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 156 ms_handle_reset con 0x558f10028400 session 0x558f126508c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89587712 unmapped: 14098432 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 156 ms_handle_reset con 0x558f13d28400 session 0x558f128a7180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 156 ms_handle_reset con 0x558f134ce000 session 0x558f10757340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 156 ms_handle_reset con 0x558f12a96000 session 0x558f126516c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 13950976 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 156 ms_handle_reset con 0x558f12a96000 session 0x558f12650a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 157 ms_handle_reset con 0x558f1272f800 session 0x558f12f83880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 157 ms_handle_reset con 0x558f0fa41000 session 0x558f1245ce00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 157 ms_handle_reset con 0x558f10028400 session 0x558f13099340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 157 ms_handle_reset con 0x558f13d28400 session 0x558f113b9880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 157 ms_handle_reset con 0x558f134ce000 session 0x558f12651180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 13942784 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 157 ms_handle_reset con 0x558f134cf800 session 0x558f12babc00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 157 heartbeat osd_stat(store_statfs(0x4fc167000/0x0/0x4ffc00000, data 0xdd5fe2/0xec5000, compress 0x0/0x0/0x0, omap 0x1cc8d, meta 0x2bb3373), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 89743360 unmapped: 13942784 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 158 ms_handle_reset con 0x558f0fa41000 session 0x558f13099c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207761 data_alloc: 218103808 data_used: 4667632
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 158 ms_handle_reset con 0x558f10028400 session 0x558f124e3180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 94412800 unmapped: 9273344 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 158 ms_handle_reset con 0x558f1272f800 session 0x558f124e2c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 158 ms_handle_reset con 0x558f0fa41000 session 0x558f124e2700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.302581787s of 10.369608879s, submitted: 38
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 158 handle_osd_map epochs [158,159], i have 159, src has [1,159]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 94404608 unmapped: 9281536 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 159 ms_handle_reset con 0x558f12a96000 session 0x558f130aa000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fc15f000/0x0/0x4ffc00000, data 0xdd9619/0xecb000, compress 0x0/0x0/0x0, omap 0x1d28f, meta 0x2bb2d71), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 159 ms_handle_reset con 0x558f1272f800 session 0x558f12650fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 94420992 unmapped: 9265152 heap: 103686144 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 160 ms_handle_reset con 0x558f10028400 session 0x558f124e2380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 160 ms_handle_reset con 0x558f134ce000 session 0x558f1292cc40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 94789632 unmapped: 18948096 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 161 ms_handle_reset con 0x558f0fa41000 session 0x558f1273ae00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 161 ms_handle_reset con 0x558f10028400 session 0x558f130996c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 94871552 unmapped: 18866176 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 161 ms_handle_reset con 0x558f1272f800 session 0x558f12b948c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278891 data_alloc: 218103808 data_used: 4668185
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 94855168 unmapped: 18882560 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 94855168 unmapped: 18882560 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb667000/0x0/0x4ffc00000, data 0x18ccd2b/0x19c1000, compress 0x0/0x0/0x0, omap 0x1dbb3, meta 0x2bb244d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 161 ms_handle_reset con 0x558f12a96000 session 0x558f12b94e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 94871552 unmapped: 18866176 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 162 ms_handle_reset con 0x558f13d28400 session 0x558f124e2540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 18857984 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 163 ms_handle_reset con 0x558f10028400 session 0x558f130aafc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 163 ms_handle_reset con 0x558f134cf800 session 0x558f130ce540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 7577600 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353199 data_alloc: 234881024 data_used: 15006505
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 7577600 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 164 heartbeat osd_stat(store_statfs(0x4fb65c000/0x0/0x4ffc00000, data 0x18d2514/0x19cc000, compress 0x0/0x0/0x0, omap 0x1e6e1, meta 0x2bb191f), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 165 ms_handle_reset con 0x558f1272f800 session 0x558f124e2e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 165 ms_handle_reset con 0x558f12a96000 session 0x558f1242fdc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 165 ms_handle_reset con 0x558f12a96400 session 0x558f12d9c8c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 165 heartbeat osd_stat(store_statfs(0x4fb657000/0x0/0x4ffc00000, data 0x18d3c0d/0x19ce000, compress 0x0/0x0/0x0, omap 0x1eaa9, meta 0x2bb1557), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 7380992 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 7356416 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.855780602s of 11.174186707s, submitted: 168
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 166 ms_handle_reset con 0x558f10028400 session 0x558f124e21c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 166 ms_handle_reset con 0x558f1272f800 session 0x558f1245b880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 7356416 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 166 ms_handle_reset con 0x558f12a96000 session 0x558f116b4000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 166 ms_handle_reset con 0x558f134cf800 session 0x558f12cd0700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 7348224 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 166 ms_handle_reset con 0x558f13d28400 session 0x558f128a6380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 166 ms_handle_reset con 0x558f0fa41000 session 0x558f121e48c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 166 heartbeat osd_stat(store_statfs(0x4fb65a000/0x0/0x4ffc00000, data 0x18d5809/0x19d0000, compress 0x0/0x0/0x0, omap 0x1f089, meta 0x2bb0f77), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356744 data_alloc: 234881024 data_used: 15007074
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 100139008 unmapped: 13598720 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 166 ms_handle_reset con 0x558f13d28400 session 0x558f12d476c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 99901440 unmapped: 13836288 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 99901440 unmapped: 13836288 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 99901440 unmapped: 13836288 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 13811712 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 167 ms_handle_reset con 0x558f10028400 session 0x558f11e341c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243118 data_alloc: 218103808 data_used: 4672831
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 13803520 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 167 heartbeat osd_stat(store_statfs(0x4fc145000/0x0/0x4ffc00000, data 0xde92ad/0xee5000, compress 0x0/0x0/0x0, omap 0x1f805, meta 0x2bb07fb), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 167 ms_handle_reset con 0x558f1272f800 session 0x558f130b7500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 167 ms_handle_reset con 0x558f12a96000 session 0x558f125b8380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 13803520 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 167 ms_handle_reset con 0x558f0fa41000 session 0x558f125b9880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 99934208 unmapped: 13803520 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.092245102s of 10.220550537s, submitted: 116
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 167 ms_handle_reset con 0x558f1272f800 session 0x558f13098c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 167 ms_handle_reset con 0x558f13d28400 session 0x558f0fdaa700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 99966976 unmapped: 13770752 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 168 ms_handle_reset con 0x558f10028400 session 0x558f12bac000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 100016128 unmapped: 13721600 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 168 ms_handle_reset con 0x558f12a96c00 session 0x558f12cd1180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 169 ms_handle_reset con 0x558f0fa41000 session 0x558f0fa45a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 169 ms_handle_reset con 0x558f134cf800 session 0x558f10b3d500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 169 ms_handle_reset con 0x558f12a96800 session 0x558f12bac1c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 169 heartbeat osd_stat(store_statfs(0x4faf99000/0x0/0x4ffc00000, data 0xdecb1b/0xeef000, compress 0x0/0x0/0x0, omap 0x2090c, meta 0x3d4f6f4), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260102 data_alloc: 218103808 data_used: 4673059
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 169 ms_handle_reset con 0x558f10028400 session 0x558f121e5340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102227968 unmapped: 11509760 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 170 ms_handle_reset con 0x558f1272f800 session 0x558f12027880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 170 ms_handle_reset con 0x558f0fa41000 session 0x558f12cd08c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 170 ms_handle_reset con 0x558f13d28400 session 0x558f0fa44e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101416960 unmapped: 12320768 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 170 ms_handle_reset con 0x558f12a97400 session 0x558f121e4000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 171 ms_handle_reset con 0x558f12a97800 session 0x558f0ffd68c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 171 ms_handle_reset con 0x558f12a97000 session 0x558f12d47880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 171 ms_handle_reset con 0x558f0fa41000 session 0x558f0ffc3500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 171 heartbeat osd_stat(store_statfs(0x4faf99000/0x0/0x4ffc00000, data 0xdee71b/0xef3000, compress 0x0/0x0/0x0, omap 0x213e4, meta 0x3d4ec1c), peers [0,2] op hist [1])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 171 ms_handle_reset con 0x558f12a97400 session 0x558f0ffd6000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101310464 unmapped: 12427264 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 171 ms_handle_reset con 0x558f12a97800 session 0x558f12b94540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 171 ms_handle_reset con 0x558f13d28400 session 0x558f113b96c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 171 heartbeat osd_stat(store_statfs(0x4faf98000/0x0/0x4ffc00000, data 0xdf0247/0xef2000, compress 0x0/0x0/0x0, omap 0x21ac3, meta 0x3d4e53d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 171 ms_handle_reset con 0x558f12a97c00 session 0x558f128b9500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 171 ms_handle_reset con 0x558f0fa41000 session 0x558f1074e1c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 171 ms_handle_reset con 0x558f12a97400 session 0x558f0ffd6380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 12402688 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 171 ms_handle_reset con 0x558f12a97800 session 0x558f1242e700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 12402688 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 172 ms_handle_reset con 0x558f12a97c00 session 0x558f12213a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268284 data_alloc: 218103808 data_used: 4672831
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101351424 unmapped: 12386304 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 172 ms_handle_reset con 0x558f13d28400 session 0x558f12cd0e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 172 ms_handle_reset con 0x558f0fa41000 session 0x558f11e34380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101163008 unmapped: 12574720 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 172 ms_handle_reset con 0x558f12a97800 session 0x558f12650e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 172 ms_handle_reset con 0x558f12a97400 session 0x558f120268c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 172 ms_handle_reset con 0x558f12a97c00 session 0x558f0fa44a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101163008 unmapped: 12574720 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 172 heartbeat osd_stat(store_statfs(0x4faf95000/0x0/0x4ffc00000, data 0xdf1dff/0xef5000, compress 0x0/0x0/0x0, omap 0x22766, meta 0x3d4d89a), peers [0,2] op hist [1])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 172 ms_handle_reset con 0x558f1272f800 session 0x558f121e4a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.428157806s of 10.001365662s, submitted: 206
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 172 ms_handle_reset con 0x558f0fa41000 session 0x558f130aae00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 172 ms_handle_reset con 0x558f12a97400 session 0x558f11e34540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 172 ms_handle_reset con 0x558f12a97c00 session 0x558f12bac8c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 12525568 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 172 handle_osd_map epochs [172,173], i have 173, src has [1,173]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f12a96000 session 0x558f11e35dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f12a97800 session 0x558f12212540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101236736 unmapped: 12500992 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f0fa41000 session 0x558f121e4540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f12a96000 session 0x558f1245c540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f12a97400 session 0x558f12bab6c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1276365 data_alloc: 218103808 data_used: 4673476
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101294080 unmapped: 12443648 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f12a97c00 session 0x558f12b941c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f12a97000 session 0x558f11e348c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 12402688 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f0fa41000 session 0x558f12cd1340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f12a96000 session 0x558f12bac380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101351424 unmapped: 12386304 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 heartbeat osd_stat(store_statfs(0x4faf96000/0x0/0x4ffc00000, data 0xdf39b5/0xef6000, compress 0x0/0x0/0x0, omap 0x237e4, meta 0x3d4c81c), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 heartbeat osd_stat(store_statfs(0x4faf96000/0x0/0x4ffc00000, data 0xdf39b5/0xef6000, compress 0x0/0x0/0x0, omap 0x237e4, meta 0x3d4c81c), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101351424 unmapped: 12386304 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f12a97400 session 0x558f12f82000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101351424 unmapped: 12386304 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1272987 data_alloc: 218103808 data_used: 4673460
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f12a97c00 session 0x558f126cddc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f12a96c00 session 0x558f12d46380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101351424 unmapped: 12386304 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f12a96c00 session 0x558f125b9180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 12378112 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 heartbeat osd_stat(store_statfs(0x4faf96000/0x0/0x4ffc00000, data 0xdf39b5/0xef6000, compress 0x0/0x0/0x0, omap 0x239a6, meta 0x3d4c65a), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 12378112 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f0fa41000 session 0x558f1273ac40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.519535065s of 10.699847221s, submitted: 108
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 ms_handle_reset con 0x558f12a96000 session 0x558f12cd16c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 12378112 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 12378112 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 heartbeat osd_stat(store_statfs(0x4faf96000/0x0/0x4ffc00000, data 0xdf39b5/0xef6000, compress 0x0/0x0/0x0, omap 0x23a3c, meta 0x3d4c5c4), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1276481 data_alloc: 218103808 data_used: 4673460
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 12378112 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101359616 unmapped: 12378112 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a97400 session 0x558f130cf340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101392384 unmapped: 12345344 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a97c00 session 0x558f12d47dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 12353536 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f0fa41000 session 0x558f125b8000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 12353536 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a96000 session 0x558f125b8fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283723 data_alloc: 218103808 data_used: 4673460
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 12353536 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a96c00 session 0x558f12bace00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 heartbeat osd_stat(store_statfs(0x4faf91000/0x0/0x4ffc00000, data 0xdf5454/0xefb000, compress 0x0/0x0/0x0, omap 0x24351, meta 0x3d4bcaf), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a97400 session 0x558f12baddc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a96800 session 0x558f0fb0c8c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 12468224 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f0fa41000 session 0x558f128b81c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101269504 unmapped: 12468224 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a96000 session 0x558f0fb0c1c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a96c00 session 0x558f1273a700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f13d28400 session 0x558f1273afc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a97400 session 0x558f11e34fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f11f8dc00 session 0x558f116b5180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101441536 unmapped: 12296192 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 heartbeat osd_stat(store_statfs(0x4faf90000/0x0/0x4ffc00000, data 0xdf5464/0xefc000, compress 0x0/0x0/0x0, omap 0x24756, meta 0x3d4b8aa), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.700469017s of 10.801136971s, submitted: 61
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f0fa41000 session 0x558f12bada40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a96c00 session 0x558f12bad880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f13d28400 session 0x558f1242ea80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f1252e000 session 0x558f113b9dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f1252e000 session 0x558f12baba40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f0fa41000 session 0x558f1292d6c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a96000 session 0x558f127b4a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 12115968 heap: 113737728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a96c00 session 0x558f125b88c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f11f8dc00 session 0x558f1242ee00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f11f8dc00 session 0x558f12cd0fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f0fa41000 session 0x558f125b8540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f1252e000 session 0x558f10777500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a96000 session 0x558f11e35880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331938 data_alloc: 218103808 data_used: 4673460
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a96c00 session 0x558f125b9500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101695488 unmapped: 16809984 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f0fa41000 session 0x558f1245a700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f11f8dc00 session 0x558f1273a380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f1252e000 session 0x558f12cd1500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101695488 unmapped: 16809984 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f12a96000 session 0x558f108548c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f13d28400 session 0x558f116b4e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 heartbeat osd_stat(store_statfs(0x4fa785000/0x0/0x4ffc00000, data 0x1602444/0x1707000, compress 0x0/0x0/0x0, omap 0x24a4e, meta 0x3d4b5b2), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101752832 unmapped: 16752640 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 heartbeat osd_stat(store_statfs(0x4fa761000/0x0/0x4ffc00000, data 0x1626444/0x172b000, compress 0x0/0x0/0x0, omap 0x24a4e, meta 0x3d4b5b2), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 11902976 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 11902976 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f0fa41000 session 0x558f12bad340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f11f8dc00 session 0x558f12bad180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 ms_handle_reset con 0x558f1252e000 session 0x558f130b7a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287762 data_alloc: 218103808 data_used: 4673460
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 15433728 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 heartbeat osd_stat(store_statfs(0x4fa761000/0x0/0x4ffc00000, data 0x1626444/0x172b000, compress 0x0/0x0/0x0, omap 0x24a4e, meta 0x3d4b5b2), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 15433728 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 15933440 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 174 handle_osd_map epochs [174,175], i have 175, src has [1,175]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 175 ms_handle_reset con 0x558f12a96000 session 0x558f124e3c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 15917056 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 15917056 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 175 ms_handle_reset con 0x558f1252e400 session 0x558f12773a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296487 data_alloc: 218103808 data_used: 4673479
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 175 ms_handle_reset con 0x558f0fa41000 session 0x558f126d0e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 15917056 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.259850502s of 11.416339874s, submitted: 59
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 175 handle_osd_map epochs [175,176], i have 175, src has [1,176]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 176 ms_handle_reset con 0x558f11f8dc00 session 0x558f0fa44540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101957632 unmapped: 16547840 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 176 heartbeat osd_stat(store_statfs(0x4faf8d000/0x0/0x4ffc00000, data 0xdf7402/0xeff000, compress 0x0/0x0/0x0, omap 0x25008, meta 0x3d4aff8), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 177 ms_handle_reset con 0x558f1252e000 session 0x558f13099880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102023168 unmapped: 16482304 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 177 ms_handle_reset con 0x558f1252e400 session 0x558f1245da40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102031360 unmapped: 16474112 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 177 ms_handle_reset con 0x558f1252e800 session 0x558f12026380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 177 handle_osd_map epochs [177,178], i have 177, src has [1,178]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f12a96000 session 0x558f130ce8c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102039552 unmapped: 16465920 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f1252e800 session 0x558f12d9ce00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306354 data_alloc: 218103808 data_used: 4673479
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f0fa41000 session 0x558f12027340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 16457728 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f11f8dc00 session 0x558f121e5a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f1252e000 session 0x558f124e3dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 heartbeat osd_stat(store_statfs(0x4faf83000/0x0/0x4ffc00000, data 0xdfc3ca/0xf06000, compress 0x0/0x0/0x0, omap 0x25935, meta 0x3d4a6cb), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102195200 unmapped: 16310272 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f0fa41000 session 0x558f0fb0dc00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f11f8dc00 session 0x558f125b9dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f1252e800 session 0x558f126d16c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f12a96000 session 0x558f12d9c540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102211584 unmapped: 16293888 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102211584 unmapped: 16293888 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f1252e400 session 0x558f12baa380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f0fa41000 session 0x558f121e5500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f11f8dc00 session 0x558f113b9c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f1252e800 session 0x558f126d0c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 ms_handle_reset con 0x558f12a96000 session 0x558f12b95500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 103006208 unmapped: 15499264 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358942 data_alloc: 218103808 data_used: 4674064
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 103006208 unmapped: 15499264 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 103006208 unmapped: 15499264 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 179 heartbeat osd_stat(store_statfs(0x4faa1b000/0x0/0x4ffc00000, data 0x1364e03/0x146f000, compress 0x0/0x0/0x0, omap 0x25e87, meta 0x3d4a179), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 179 ms_handle_reset con 0x558f1252ec00 session 0x558f12650000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 103137280 unmapped: 15368192 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.995203972s of 12.247943878s, submitted: 129
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 179 ms_handle_reset con 0x558f0fa41000 session 0x558f127b48c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 179 ms_handle_reset con 0x558f11f8dc00 session 0x558f12d9cfc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 15589376 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 179 handle_osd_map epochs [179,180], i have 179, src has [1,180]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 180 ms_handle_reset con 0x558f1252e800 session 0x558f10777180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 180 ms_handle_reset con 0x558f1252ec00 session 0x558f12026e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 180 ms_handle_reset con 0x558f12a96000 session 0x558f125b8a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 180 ms_handle_reset con 0x558f0fa41000 session 0x558f12026c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 15589376 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362533 data_alloc: 218103808 data_used: 4674072
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 15589376 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 14999552 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 103505920 unmapped: 14999552 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 180 heartbeat osd_stat(store_statfs(0x4faa16000/0x0/0x4ffc00000, data 0x1366882/0x1472000, compress 0x0/0x0/0x0, omap 0x262fa, meta 0x3d49d06), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 180 heartbeat osd_stat(store_statfs(0x4faa16000/0x0/0x4ffc00000, data 0x1366882/0x1472000, compress 0x0/0x0/0x0, omap 0x262fa, meta 0x3d49d06), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 14934016 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 180 ms_handle_reset con 0x558f11f8dc00 session 0x558f0fa44c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 14934016 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 180 ms_handle_reset con 0x558f1252e800 session 0x558f11e34e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321492 data_alloc: 218103808 data_used: 4674064
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 180 ms_handle_reset con 0x558f1252ec00 session 0x558f0fb0c700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 16793600 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 180 heartbeat osd_stat(store_statfs(0x4faf7f000/0x0/0x4ffc00000, data 0xdff8f4/0xf0d000, compress 0x0/0x0/0x0, omap 0x2660d, meta 0x3d499f3), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 16793600 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 180 ms_handle_reset con 0x558f1252f000 session 0x558f11e35c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f0fa41000 session 0x558f1273a8c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 101842944 unmapped: 16662528 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.418493271s of 10.489959717s, submitted: 57
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f1252e800 session 0x558f11e34a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f11f8dc00 session 0x558f130b6380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 heartbeat osd_stat(store_statfs(0x4faf77000/0x0/0x4ffc00000, data 0xe01564/0xf13000, compress 0x0/0x0/0x0, omap 0x26bf8, meta 0x3d49408), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 102891520 unmapped: 15613952 heap: 118505472 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f1252ec00 session 0x558f1242e1c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f1252f400 session 0x558f12ad1dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 15532032 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556873 data_alloc: 218103808 data_used: 4674178
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 22740992 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f1252e800 session 0x558f12ad16c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 26468352 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 heartbeat osd_stat(store_statfs(0x4f3f79000/0x0/0x4ffc00000, data 0x7e01564/0x7f13000, compress 0x0/0x0/0x0, omap 0x26eab, meta 0x3d49155), peers [0,2] op hist [1])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f1252ec00 session 0x558f0ffc2000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 heartbeat osd_stat(store_statfs(0x4f0f79000/0x0/0x4ffc00000, data 0xae01564/0xaf13000, compress 0x0/0x0/0x0, omap 0x26eab, meta 0x3d49155), peers [0,2] op hist [0,0,0,0,0,0,3])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 13877248 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f1252fc00 session 0x558f11e35500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f1252f800 session 0x558f0ffc3340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 28680192 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 107094016 unmapped: 28213248 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057635 data_alloc: 218103808 data_used: 4674178
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f11f8dc00 session 0x558f12651500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f0fa41000 session 0x558f130b6fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f11f8dc00 session 0x558f1292d500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 27885568 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f1252e800 session 0x558f0ffd6a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 heartbeat osd_stat(store_statfs(0x4e6fb7000/0x0/0x4ffc00000, data 0x14dc4554/0x14ed5000, compress 0x0/0x0/0x0, omap 0x26fd3, meta 0x3d4902d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f1252ec00 session 0x558f10756000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 28188672 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 ms_handle_reset con 0x558f1252f800 session 0x558f0fb0cc40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 28188672 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 heartbeat osd_stat(store_statfs(0x4f93b7000/0x0/0x4ffc00000, data 0x15c44f2/0x16d4000, compress 0x0/0x0/0x0, omap 0x26fd3, meta 0x3d4902d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 182 ms_handle_reset con 0x558f11f8dc00 session 0x558f124e2700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 182 ms_handle_reset con 0x558f0fa41000 session 0x558f124e36c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 28172288 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 28172288 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.079856873s of 11.501769066s, submitted: 490
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1477604 data_alloc: 218103808 data_used: 4796034
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 107151360 unmapped: 28155904 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 182 ms_handle_reset con 0x558f1252f800 session 0x558f12ad1500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 182 ms_handle_reset con 0x558f1252fc00 session 0x558f130b76c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 23486464 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 182 ms_handle_reset con 0x558f1252ec00 session 0x558f12651880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 182 ms_handle_reset con 0x558f1252e800 session 0x558f1273b500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 182 ms_handle_reset con 0x558f1252ec00 session 0x558f12babdc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 182 heartbeat osd_stat(store_statfs(0x4fa7b6000/0x0/0x4ffc00000, data 0x15c6080/0x16d6000, compress 0x0/0x0/0x0, omap 0x274cb, meta 0x3d48b35), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 182 ms_handle_reset con 0x558f0fa41000 session 0x558f12b94a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 26353664 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 26353664 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 108953600 unmapped: 26353664 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f11f8dc00 session 0x558f113b8000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f1252f800 session 0x558f0fdaba40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488297 data_alloc: 218103808 data_used: 4678176
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f0fa41000 session 0x558f10854a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f11f8dc00 session 0x558f11e34700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 24641536 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 24641536 heap: 135307264 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f1252e800 session 0x558f108c2a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 heartbeat osd_stat(store_statfs(0x4fa75b000/0x0/0x4ffc00000, data 0x161eb61/0x1731000, compress 0x0/0x0/0x0, omap 0x27d47, meta 0x3d482b9), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110690304 unmapped: 33013760 heap: 143704064 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110297088 unmapped: 33406976 heap: 143704064 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110313472 unmapped: 33390592 heap: 143704064 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.543740273s of 10.050633430s, submitted: 103
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1791940 data_alloc: 218103808 data_used: 4678176
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 33398784 heap: 143704064 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x5e1eb71/0x5f32000, compress 0x0/0x0/0x0, omap 0x2769f, meta 0x3d48961), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110305280 unmapped: 33398784 heap: 143704064 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110313472 unmapped: 33390592 heap: 143704064 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110313472 unmapped: 33390592 heap: 143704064 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110313472 unmapped: 33390592 heap: 143704064 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 heartbeat osd_stat(store_statfs(0x4eef5a000/0x0/0x4ffc00000, data 0xce1eb71/0xcf32000, compress 0x0/0x0/0x0, omap 0x2769f, meta 0x3d48961), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2582264 data_alloc: 218103808 data_used: 4678176
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 33374208 heap: 143704064 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f1252f800 session 0x558f1074f340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f1252fc00 session 0x558f130ce700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f0fa41000 session 0x558f116b5dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f11f8dc00 session 0x558f108c2e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 heartbeat osd_stat(store_statfs(0x4ed75a000/0x0/0x4ffc00000, data 0xe61eb71/0xe732000, compress 0x0/0x0/0x0, omap 0x2769f, meta 0x3d48961), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f1252e800 session 0x558f108541c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f1252f800 session 0x558f0ffd76c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f13f18000 session 0x558f0ffd6e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f0fa41000 session 0x558f0ffc3a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f11f8dc00 session 0x558f107761c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 33079296 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 41418752 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f1252e800 session 0x558f0fa44380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 41418752 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 41369600 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.328946114s of 10.229765892s, submitted: 63
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3072839 data_alloc: 218103808 data_used: 4678176
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 110780416 unmapped: 41320448 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 heartbeat osd_stat(store_statfs(0x4e7413000/0x0/0x4ffc00000, data 0x14963b91/0x14a79000, compress 0x0/0x0/0x0, omap 0x278e1, meta 0x3d4871f), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 32686080 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 111149056 unmapped: 40951808 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 ms_handle_reset con 0x558f1252f800 session 0x558f0ffc21c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 40386560 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 184 ms_handle_reset con 0x558f1252ec00 session 0x558f108c3dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 40353792 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560504 data_alloc: 234881024 data_used: 15284784
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 33669120 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 118439936 unmapped: 33660928 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f0fa41000 session 0x558f0fdaaa80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 heartbeat osd_stat(store_statfs(0x4e2be6000/0x0/0x4ffc00000, data 0x1918f72d/0x192a6000, compress 0x0/0x0/0x0, omap 0x28bd9, meta 0x3d47427), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f11f8dc00 session 0x558f113b8fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 34471936 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f1252e800 session 0x558f108c3500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f1252f800 session 0x558f108c2000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f13f18c00 session 0x558f108c21c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f0fa41000 session 0x558f0fdaa8c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f1252e800 session 0x558f116b5c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f11f8dc00 session 0x558f1273bdc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f1252f800 session 0x558f12baca80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f13f19000 session 0x558f12d9ddc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f0fa41000 session 0x558f130b68c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f11f8dc00 session 0x558f1074efc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 118595584 unmapped: 33505280 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 118595584 unmapped: 33505280 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 heartbeat osd_stat(store_statfs(0x4f9446000/0x0/0x4ffc00000, data 0x292b37f/0x2a44000, compress 0x0/0x0/0x0, omap 0x29005, meta 0x3d46ffb), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1763316 data_alloc: 234881024 data_used: 15284784
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f1252e800 session 0x558f10776000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 118611968 unmapped: 33488896 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f1252f800 session 0x558f0ffd7c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 118611968 unmapped: 33488896 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f13f19400 session 0x558f0fa44c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.500283241s of 12.052241325s, submitted: 143
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 ms_handle_reset con 0x558f0fa41000 session 0x558f124d56c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 118620160 unmapped: 33480704 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 heartbeat osd_stat(store_statfs(0x4f9447000/0x0/0x4ffc00000, data 0x292b38f/0x2a45000, compress 0x0/0x0/0x0, omap 0x290cb, meta 0x3d46f35), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 122757120 unmapped: 29343744 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 185 handle_osd_map epochs [185,186], i have 186, src has [1,186]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 186 heartbeat osd_stat(store_statfs(0x4f9442000/0x0/0x4ffc00000, data 0x292ce0e/0x2a48000, compress 0x0/0x0/0x0, omap 0x29417, meta 0x3d46be9), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 129236992 unmapped: 22863872 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 186 heartbeat osd_stat(store_statfs(0x4f9442000/0x0/0x4ffc00000, data 0x292ce0e/0x2a48000, compress 0x0/0x0/0x0, omap 0x29417, meta 0x3d46be9), peers [0,2] op hist [0,0,0,1,10])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1886487 data_alloc: 234881024 data_used: 23987264
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 16728064 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136036352 unmapped: 16064512 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136036352 unmapped: 16064512 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136036352 unmapped: 16064512 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 186 heartbeat osd_stat(store_statfs(0x4f8985000/0x0/0x4ffc00000, data 0x33eae0e/0x3506000, compress 0x0/0x0/0x0, omap 0x29417, meta 0x3d46be9), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136069120 unmapped: 16031744 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1898363 data_alloc: 234881024 data_used: 24065088
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 186 heartbeat osd_stat(store_statfs(0x4f8985000/0x0/0x4ffc00000, data 0x33eae0e/0x3506000, compress 0x0/0x0/0x0, omap 0x29417, meta 0x3d46be9), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136101888 unmapped: 15998976 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136101888 unmapped: 15998976 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 186 heartbeat osd_stat(store_statfs(0x4f8985000/0x0/0x4ffc00000, data 0x33eae0e/0x3506000, compress 0x0/0x0/0x0, omap 0x29417, meta 0x3d46be9), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 134766592 unmapped: 17334272 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.089828491s of 11.352212906s, submitted: 128
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 137019392 unmapped: 15081472 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 186 ms_handle_reset con 0x558f1252f800 session 0x558f113e1180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136151040 unmapped: 15949824 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1962853 data_alloc: 234881024 data_used: 24196160
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 15818752 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 186 heartbeat osd_stat(store_statfs(0x4f7f3e000/0x0/0x4ffc00000, data 0x3e29e70/0x3f46000, compress 0x0/0x0/0x0, omap 0x2973a, meta 0x3d468c6), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 15802368 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 186 heartbeat osd_stat(store_statfs(0x4f7f3e000/0x0/0x4ffc00000, data 0x3e29e70/0x3f46000, compress 0x0/0x0/0x0, omap 0x2973a, meta 0x3d468c6), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 15671296 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 187 ms_handle_reset con 0x558f13f19c00 session 0x558f0ffc2fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 16105472 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 187 handle_osd_map epochs [187,188], i have 187, src has [1,188]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 188 ms_handle_reset con 0x558f134ce800 session 0x558f113b8380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 188 ms_handle_reset con 0x558f134cec00 session 0x558f1242f6c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 188 ms_handle_reset con 0x558f0fa41000 session 0x558f130b6000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 188 ms_handle_reset con 0x558f1252e000 session 0x558f10855dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 15646720 heap: 152100864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 188 ms_handle_reset con 0x558f134cf400 session 0x558f12ad1340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 188 handle_osd_map epochs [188,189], i have 188, src has [1,189]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 189 ms_handle_reset con 0x558f134cf000 session 0x558f113b88c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 189 ms_handle_reset con 0x558f1252f800 session 0x558f10855c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 189 ms_handle_reset con 0x558f0fa41000 session 0x558f108c3c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 189 ms_handle_reset con 0x558f1252e000 session 0x558f128b8000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 189 ms_handle_reset con 0x558f134cf000 session 0x558f10793500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 189 ms_handle_reset con 0x558f134cf400 session 0x558f113b8700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2008676 data_alloc: 234881024 data_used: 24200354
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136683520 unmapped: 19095552 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 189 ms_handle_reset con 0x558f134cf800 session 0x558f108c3180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 189 heartbeat osd_stat(store_statfs(0x4f784f000/0x0/0x4ffc00000, data 0x45191b6/0x463b000, compress 0x0/0x0/0x0, omap 0x29f74, meta 0x3d4608c), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 19226624 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 190 ms_handle_reset con 0x558f0fa41000 session 0x558f107576c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 190 heartbeat osd_stat(store_statfs(0x4f784f000/0x0/0x4ffc00000, data 0x45191b6/0x463b000, compress 0x0/0x0/0x0, omap 0x29f74, meta 0x3d4608c), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 19193856 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 190 ms_handle_reset con 0x558f134cf000 session 0x558f0ffc36c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.723563194s of 10.288646698s, submitted: 162
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 19193856 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 191 ms_handle_reset con 0x558f134cf400 session 0x558f108c3a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 191 ms_handle_reset con 0x558f134cfc00 session 0x558f12bacc40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 191 ms_handle_reset con 0x558f134cec00 session 0x558f108c2c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 136855552 unmapped: 18923520 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 191 heartbeat osd_stat(store_statfs(0x4f7849000/0x0/0x4ffc00000, data 0x451c9a6/0x4641000, compress 0x0/0x0/0x0, omap 0x2a3a8, meta 0x3d45c58), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 191 handle_osd_map epochs [191,192], i have 191, src has [1,192]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 192 ms_handle_reset con 0x558f134ce800 session 0x558f127b5880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2016966 data_alloc: 234881024 data_used: 24200873
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 18350080 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 192 ms_handle_reset con 0x558f13f19800 session 0x558f12baafc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 192 ms_handle_reset con 0x558f13f18400 session 0x558f113b8e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 192 ms_handle_reset con 0x558f13f18800 session 0x558f12baa1c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142516224 unmapped: 13262848 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 192 ms_handle_reset con 0x558f1252e000 session 0x558f1074ec40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 192 ms_handle_reset con 0x558f1252e000 session 0x558f124e3180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 192 ms_handle_reset con 0x558f11f8dc00 session 0x558f12d9d180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 192 ms_handle_reset con 0x558f1252e800 session 0x558f130aba40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 135356416 unmapped: 20422656 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 192 ms_handle_reset con 0x558f13f19c00 session 0x558f122121c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 129581056 unmapped: 26198016 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 129581056 unmapped: 26198016 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706647 data_alloc: 234881024 data_used: 11809571
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 193 ms_handle_reset con 0x558f0fa40000 session 0x558f107768c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 129589248 unmapped: 26189824 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 193 heartbeat osd_stat(store_statfs(0x4f9605000/0x0/0x4ffc00000, data 0x1cf8504/0x1e1a000, compress 0x0/0x0/0x0, omap 0x2af74, meta 0x3d4508c), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 193 ms_handle_reset con 0x558f11f8dc00 session 0x558f12ad0000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 193 ms_handle_reset con 0x558f1252e000 session 0x558f113b81c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130686976 unmapped: 25092096 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 193 ms_handle_reset con 0x558f1252e800 session 0x558f116b4fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fa887000/0x0/0x4ffc00000, data 0x14dff3d/0x1602000, compress 0x0/0x0/0x0, omap 0x2b35c, meta 0x3d44ca4), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 194 ms_handle_reset con 0x558f13f19c00 session 0x558f108c36c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130686976 unmapped: 25092096 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 194 heartbeat osd_stat(store_statfs(0x4fa886000/0x0/0x4ffc00000, data 0x14e1ae5/0x1604000, compress 0x0/0x0/0x0, omap 0x2b6b8, meta 0x3d44948), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 194 ms_handle_reset con 0x558f14398400 session 0x558f0fb0d880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.676922798s of 10.031051636s, submitted: 176
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130686976 unmapped: 25092096 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 195 ms_handle_reset con 0x558f11f8dc00 session 0x558f1242e380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130686976 unmapped: 25092096 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 195 handle_osd_map epochs [195,196], i have 195, src has [1,196]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 196 ms_handle_reset con 0x558f1252e000 session 0x558f10777500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1679537 data_alloc: 234881024 data_used: 11805263
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130924544 unmapped: 24854528 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 196 heartbeat osd_stat(store_statfs(0x4f9edc000/0x0/0x4ffc00000, data 0x1e8928d/0x1fae000, compress 0x0/0x0/0x0, omap 0x2baea, meta 0x3d44516), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130686976 unmapped: 25092096 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130686976 unmapped: 25092096 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130686976 unmapped: 25092096 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130162688 unmapped: 25616384 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9eb6000/0x0/0x4ffc00000, data 0x1eadd28/0x1fd4000, compress 0x0/0x0/0x0, omap 0x2be4b, meta 0x3d441b5), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1726267 data_alloc: 234881024 data_used: 11884285
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9eb6000/0x0/0x4ffc00000, data 0x1eadd28/0x1fd4000, compress 0x0/0x0/0x0, omap 0x2be4b, meta 0x3d441b5), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130162688 unmapped: 25616384 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130162688 unmapped: 25616384 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130162688 unmapped: 25616384 heap: 155779072 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 197 heartbeat osd_stat(store_statfs(0x4f9eb5000/0x0/0x4ffc00000, data 0x1eb0d28/0x1fd7000, compress 0x0/0x0/0x0, omap 0x2be4b, meta 0x3d441b5), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 197 ms_handle_reset con 0x558f1252e800 session 0x558f12651a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 197 ms_handle_reset con 0x558f13f19c00 session 0x558f128b8540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 197 ms_handle_reset con 0x558f12a97800 session 0x558f1245c1c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 197 ms_handle_reset con 0x558f11f8dc00 session 0x558f130cea80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 197 ms_handle_reset con 0x558f1252e000 session 0x558f10793340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 197 handle_osd_map epochs [197,198], i have 198, src has [1,198]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.927391052s of 10.191184998s, submitted: 111
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130179072 unmapped: 27222016 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130179072 unmapped: 27222016 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1770933 data_alloc: 234881024 data_used: 11884285
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130179072 unmapped: 27222016 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 198 ms_handle_reset con 0x558f12a97800 session 0x558f1273a540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 198 ms_handle_reset con 0x558f13f19c00 session 0x558f130ab6c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 27213824 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 198 ms_handle_reset con 0x558f12a96400 session 0x558f12bab500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 198 heartbeat osd_stat(store_statfs(0x4f9826000/0x0/0x4ffc00000, data 0x253e7a7/0x2666000, compress 0x0/0x0/0x0, omap 0x2c4bb, meta 0x3d43b45), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 198 ms_handle_reset con 0x558f11f8dc00 session 0x558f127b4000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 198 ms_handle_reset con 0x558f1252e000 session 0x558f12bac700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 27213824 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 198 ms_handle_reset con 0x558f12a96400 session 0x558f1242e000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 130211840 unmapped: 27189248 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 199 ms_handle_reset con 0x558f13f19c00 session 0x558f12cd0c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 199 ms_handle_reset con 0x558f12a97000 session 0x558f12651dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 25460736 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 199 handle_osd_map epochs [199,200], i have 199, src has [1,200]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 200 ms_handle_reset con 0x558f11f8dc00 session 0x558f0fdab6c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 200 ms_handle_reset con 0x558f12a96400 session 0x558f12d9d6c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 200 ms_handle_reset con 0x558f12a97800 session 0x558f0ffc3c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 200 ms_handle_reset con 0x558f13f19c00 session 0x558f11e35a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 200 ms_handle_reset con 0x558f12a97c00 session 0x558f0fdaac40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1823274 data_alloc: 234881024 data_used: 18749279
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 25411584 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 200 heartbeat osd_stat(store_statfs(0x4f981c000/0x0/0x4ffc00000, data 0x2541fa3/0x266e000, compress 0x0/0x0/0x0, omap 0x2d174, meta 0x3d42e8c), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 200 ms_handle_reset con 0x558f12a96400 session 0x558f113e0e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 132005888 unmapped: 25395200 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 201 ms_handle_reset con 0x558f11f8dc00 session 0x558f116b5880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 132005888 unmapped: 25395200 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 201 ms_handle_reset con 0x558f1252e800 session 0x558f1074ea80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 201 heartbeat osd_stat(store_statfs(0x4f9816000/0x0/0x4ffc00000, data 0x2543b41/0x2671000, compress 0x0/0x0/0x0, omap 0x2d4e0, meta 0x3d42b20), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 132022272 unmapped: 25378816 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 201 handle_osd_map epochs [201,202], i have 201, src has [1,202]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.930810928s of 10.077245712s, submitted: 89
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 202 ms_handle_reset con 0x558f12a97800 session 0x558f12027500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 202 ms_handle_reset con 0x558f134cec00 session 0x558f0ffc3dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 202 ms_handle_reset con 0x558f134cec00 session 0x558f1292cfc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 202 ms_handle_reset con 0x558f11f8dc00 session 0x558f10855a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 129449984 unmapped: 27951104 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 202 heartbeat osd_stat(store_statfs(0x4fa8a8000/0x0/0x4ffc00000, data 0x14b175b/0x15e1000, compress 0x0/0x0/0x0, omap 0x2d92a, meta 0x3d426d6), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704212 data_alloc: 234881024 data_used: 11553220
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 202 heartbeat osd_stat(store_statfs(0x4fa8a8000/0x0/0x4ffc00000, data 0x14b175b/0x15e1000, compress 0x0/0x0/0x0, omap 0x2d92a, meta 0x3d426d6), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 129449984 unmapped: 27951104 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 202 ms_handle_reset con 0x558f1252e800 session 0x558f0ffc2a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 202 ms_handle_reset con 0x558f12a96400 session 0x558f12650c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 128729088 unmapped: 28672000 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 202 ms_handle_reset con 0x558f12a97800 session 0x558f10756e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 202 ms_handle_reset con 0x558f11f8dc00 session 0x558f1273a000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 128778240 unmapped: 28622848 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 203 ms_handle_reset con 0x558f1252e800 session 0x558f1074f6c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 203 ms_handle_reset con 0x558f12a96400 session 0x558f10756fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 203 ms_handle_reset con 0x558f134cec00 session 0x558f12651340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 203 ms_handle_reset con 0x558f12a97c00 session 0x558f10855880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 128843776 unmapped: 28557312 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 25927680 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 204 heartbeat osd_stat(store_statfs(0x4f9d2e000/0x0/0x4ffc00000, data 0x202ecbc/0x215c000, compress 0x0/0x0/0x0, omap 0x2e489, meta 0x3d41b77), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1771864 data_alloc: 234881024 data_used: 12368210
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 131522560 unmapped: 25878528 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 204 ms_handle_reset con 0x558f11f8dc00 session 0x558f0e416fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 204 ms_handle_reset con 0x558f1252e800 session 0x558f12650700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 131743744 unmapped: 25657344 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 204 ms_handle_reset con 0x558f12a96400 session 0x558f113e0540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 204 ms_handle_reset con 0x558f12a97c00 session 0x558f113b9a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 204 ms_handle_reset con 0x558f134cec00 session 0x558f126d1c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 25534464 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 25534464 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.986651421s of 10.514215469s, submitted: 254
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 205 ms_handle_reset con 0x558f11f8dc00 session 0x558f124e3880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 25534464 heap: 157401088 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 205 ms_handle_reset con 0x558f1252e800 session 0x558f127b41c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 205 ms_handle_reset con 0x558f12a96400 session 0x558f126d0380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 205 heartbeat osd_stat(store_statfs(0x4f9d1c000/0x0/0x4ffc00000, data 0x203e74b/0x216e000, compress 0x0/0x0/0x0, omap 0x2eb0e, meta 0x3d414f2), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1782157 data_alloc: 234881024 data_used: 12568914
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 132784128 unmapped: 28295168 heap: 161079296 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 205 ms_handle_reset con 0x558f12a97c00 session 0x558f124e3500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 205 ms_handle_reset con 0x558f134cec00 session 0x558f11e34000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 205 ms_handle_reset con 0x558f11f8dc00 session 0x558f127b5180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 29351936 heap: 161079296 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 205 ms_handle_reset con 0x558f1252e800 session 0x558f10855500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 205 ms_handle_reset con 0x558f12a96400 session 0x558f10777340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 205 ms_handle_reset con 0x558f12a97c00 session 0x558f107776c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 132005888 unmapped: 29073408 heap: 161079296 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 205 ms_handle_reset con 0x558f13f19c00 session 0x558f1074e380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 132005888 unmapped: 29073408 heap: 161079296 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 206 ms_handle_reset con 0x558f1252e800 session 0x558f0ffc28c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 206 ms_handle_reset con 0x558f11f8dc00 session 0x558f113e0380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 207 ms_handle_reset con 0x558f12a96400 session 0x558f124d4380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 26525696 heap: 161079296 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 207 ms_handle_reset con 0x558f134cec00 session 0x558f126f1340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 207 ms_handle_reset con 0x558f13f19c00 session 0x558f108541c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 207 ms_handle_reset con 0x558f11f8dc00 session 0x558f0ffc21c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969839 data_alloc: 234881024 data_used: 12569202
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 207 ms_handle_reset con 0x558f12a96400 session 0x558f12bac700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 208 ms_handle_reset con 0x558f1252e800 session 0x558f10855340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 208 heartbeat osd_stat(store_statfs(0x4f816f000/0x0/0x4ffc00000, data 0x3be3f57/0x3d19000, compress 0x0/0x0/0x0, omap 0x2f576, meta 0x3d40a8a), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 134053888 unmapped: 27025408 heap: 161079296 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 208 ms_handle_reset con 0x558f134cec00 session 0x558f0ffd6700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 208 ms_handle_reset con 0x558f12a97400 session 0x558f0ffd6a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 209 ms_handle_reset con 0x558f12a96000 session 0x558f10855c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 27705344 heap: 161079296 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 209 ms_handle_reset con 0x558f1252e800 session 0x558f10854380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 209 ms_handle_reset con 0x558f12a96400 session 0x558f1273aa80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 209 ms_handle_reset con 0x558f134cec00 session 0x558f10776700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 209 handle_osd_map epochs [209,210], i have 209, src has [1,210]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 210 ms_handle_reset con 0x558f12a97400 session 0x558f10b3cc40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 132923392 unmapped: 31834112 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 210 ms_handle_reset con 0x558f1252e800 session 0x558f12baa540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 210 ms_handle_reset con 0x558f11f8dc00 session 0x558f12d47a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 210 handle_osd_map epochs [210,211], i have 210, src has [1,211]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 211 ms_handle_reset con 0x558f1263c000 session 0x558f130b6c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 132931584 unmapped: 31825920 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 211 handle_osd_map epochs [211,212], i have 211, src has [1,212]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.057014465s of 10.523157120s, submitted: 148
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 132947968 unmapped: 31809536 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 212 ms_handle_reset con 0x558f12a96000 session 0x558f10855dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 212 ms_handle_reset con 0x558f12a96400 session 0x558f10855180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 212 ms_handle_reset con 0x558f11f8dc00 session 0x558f0ffd7c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1951230 data_alloc: 234881024 data_used: 12573829
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 133013504 unmapped: 31744000 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 213 ms_handle_reset con 0x558f1252e800 session 0x558f108556c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 213 heartbeat osd_stat(store_statfs(0x4f8708000/0x0/0x4ffc00000, data 0x3648947/0x3782000, compress 0x0/0x0/0x0, omap 0x30482, meta 0x3d3fb7e), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 213 ms_handle_reset con 0x558f1263c000 session 0x558f1074e700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 213 heartbeat osd_stat(store_statfs(0x4f8708000/0x0/0x4ffc00000, data 0x3648947/0x3782000, compress 0x0/0x0/0x0, omap 0x30482, meta 0x3d3fb7e), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 213 ms_handle_reset con 0x558f12a96000 session 0x558f12773500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 213 ms_handle_reset con 0x558f134cec00 session 0x558f1273b880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 133070848 unmapped: 31686656 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 132931584 unmapped: 31825920 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 214 ms_handle_reset con 0x558f0fdc7400 session 0x558f0ffc2e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 214 ms_handle_reset con 0x558f11f8dc00 session 0x558f12d46fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 214 ms_handle_reset con 0x558f1252e800 session 0x558f1273b340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 214 heartbeat osd_stat(store_statfs(0x4f9360000/0x0/0x4ffc00000, data 0x29f1782/0x2b2c000, compress 0x0/0x0/0x0, omap 0x308ed, meta 0x3d3f713), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 132931584 unmapped: 31825920 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 215 ms_handle_reset con 0x558f1263c000 session 0x558f126508c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 215 ms_handle_reset con 0x558f12a96000 session 0x558f12d9c700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 215 ms_handle_reset con 0x558f11f8dc00 session 0x558f113e16c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 215 ms_handle_reset con 0x558f12a97c00 session 0x558f126cdc00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 132931584 unmapped: 31825920 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 215 ms_handle_reset con 0x558f1252e000 session 0x558f0fa448c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 216 ms_handle_reset con 0x558f0fdc7400 session 0x558f10b3d180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 216 ms_handle_reset con 0x558f1252e800 session 0x558f12026a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1696128 data_alloc: 218103808 data_used: 4698521
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 216 ms_handle_reset con 0x558f0fdc7400 session 0x558f0ffc2a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126541824 unmapped: 38215680 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 216 ms_handle_reset con 0x558f1252e000 session 0x558f113e1180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 216 ms_handle_reset con 0x558f12a97c00 session 0x558f12027c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 38182912 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 216 handle_osd_map epochs [216,217], i have 217, src has [1,217]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 217 ms_handle_reset con 0x558f1263c000 session 0x558f12f82a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 38182912 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 217 handle_osd_map epochs [217,218], i have 217, src has [1,218]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 218 ms_handle_reset con 0x558f0fdc7800 session 0x558f0fb0d180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 218 ms_handle_reset con 0x558f1252e800 session 0x558f12026fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 218 heartbeat osd_stat(store_statfs(0x4faf08000/0x0/0x4ffc00000, data 0xe41378/0xf82000, compress 0x0/0x0/0x0, omap 0x31a95, meta 0x3d3e56b), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 218 ms_handle_reset con 0x558f0fdc7400 session 0x558f0ffd7880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 38141952 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 219 heartbeat osd_stat(store_statfs(0x4faf02000/0x0/0x4ffc00000, data 0xe42ecd/0xf86000, compress 0x0/0x0/0x0, omap 0x31f83, meta 0x3d3e07d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 219 ms_handle_reset con 0x558f1252e000 session 0x558f13099a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 38141952 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.686390877s of 10.086072922s, submitted: 195
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 220 ms_handle_reset con 0x558f1263c000 session 0x558f12f82380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1717405 data_alloc: 218103808 data_used: 4699042
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126623744 unmapped: 38133760 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 220 ms_handle_reset con 0x558f1424ac00 session 0x558f10854fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 220 ms_handle_reset con 0x558f1424a000 session 0x558f0fa44000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 220 ms_handle_reset con 0x558f0fdc7400 session 0x558f0ffc3180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 220 ms_handle_reset con 0x558f1252e800 session 0x558f0ffc3dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 38117376 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 220 ms_handle_reset con 0x558f1252e000 session 0x558f113b9880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 221 ms_handle_reset con 0x558f12a97c00 session 0x558f12cd1880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 221 ms_handle_reset con 0x558f0fdc7400 session 0x558f10855dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126410752 unmapped: 38346752 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 221 ms_handle_reset con 0x558f1252e000 session 0x558f12773500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 222 ms_handle_reset con 0x558f1252e800 session 0x558f126d1180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 38330368 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 222 ms_handle_reset con 0x558f1424a000 session 0x558f127736c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 222 ms_handle_reset con 0x558f12a97c00 session 0x558f116b5a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 222 heartbeat osd_stat(store_statfs(0x4faef8000/0x0/0x4ffc00000, data 0xe48275/0xf90000, compress 0x0/0x0/0x0, omap 0x3291f, meta 0x3d3d6e1), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 38330368 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 222 heartbeat osd_stat(store_statfs(0x4faef8000/0x0/0x4ffc00000, data 0xe48275/0xf90000, compress 0x0/0x0/0x0, omap 0x3291f, meta 0x3d3d6e1), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 223 ms_handle_reset con 0x558f1263c000 session 0x558f108c3880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 223 ms_handle_reset con 0x558f1252e000 session 0x558f12213500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1724419 data_alloc: 218103808 data_used: 4699823
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 223 ms_handle_reset con 0x558f1252e800 session 0x558f12213340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126435328 unmapped: 38322176 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 224 ms_handle_reset con 0x558f0fdc7400 session 0x558f10777500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126484480 unmapped: 38273024 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 224 ms_handle_reset con 0x558f1424a000 session 0x558f126cdc00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 225 ms_handle_reset con 0x558f12a97c00 session 0x558f10757dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 225 ms_handle_reset con 0x558f0fdc7400 session 0x558f10854540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 38256640 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 225 heartbeat osd_stat(store_statfs(0x4faef4000/0x0/0x4ffc00000, data 0xe4d5a9/0xf96000, compress 0x0/0x0/0x0, omap 0x334d7, meta 0x3d3cb29), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 225 ms_handle_reset con 0x558f1252e800 session 0x558f0ffd68c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 38256640 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 226 ms_handle_reset con 0x558f1252e000 session 0x558f124d4fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126509056 unmapped: 38248448 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1734247 data_alloc: 218103808 data_used: 4700114
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 226 heartbeat osd_stat(store_statfs(0x4faeef000/0x0/0x4ffc00000, data 0xe4f17d/0xf99000, compress 0x0/0x0/0x0, omap 0x33b9b, meta 0x3d3c465), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.480701447s of 11.010142326s, submitted: 117
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126509056 unmapped: 38248448 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 226 ms_handle_reset con 0x558f1424ac00 session 0x558f0ffd6e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 226 ms_handle_reset con 0x558f1263c000 session 0x558f0fb0d880
Feb  2 07:15:01 np0005604943 ceph-mon[75271]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Feb  2 07:15:01 np0005604943 ceph-mon[75271]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3020315223' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 226 heartbeat osd_stat(store_statfs(0x4faeef000/0x0/0x4ffc00000, data 0xe4f17d/0xf99000, compress 0x0/0x0/0x0, omap 0x33c29, meta 0x3d3c3d7), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126509056 unmapped: 38248448 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 226 ms_handle_reset con 0x558f1424ac00 session 0x558f0fb0cfc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 226 ms_handle_reset con 0x558f0fdc7400 session 0x558f1292d340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 38240256 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 226 ms_handle_reset con 0x558f1252e000 session 0x558f127b5c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1252e800 session 0x558f10854000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 125845504 unmapped: 38912000 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 heartbeat osd_stat(store_statfs(0x4faeee000/0x0/0x4ffc00000, data 0xe50ddd/0xf9c000, compress 0x0/0x0/0x0, omap 0x34654, meta 0x3d3b9ac), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 125845504 unmapped: 38912000 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f0fdc7400 session 0x558f1273a1c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1252e000 session 0x558f12bac380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735333 data_alloc: 218103808 data_used: 4700699
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 37830656 heap: 164757504 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1263c000 session 0x558f1245bdc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f12a97c00 session 0x558f130cfc00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1424ac00 session 0x558f108c3a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1424b800 session 0x558f130b6c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f0fdc7400 session 0x558f12026a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1252e000 session 0x558f0fa448c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1263c000 session 0x558f130cfc00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 39165952 heap: 166379520 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f12a97c00 session 0x558f0ffd6e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 heartbeat osd_stat(store_statfs(0x4fa645000/0x0/0x4ffc00000, data 0x16fdd32/0x1847000, compress 0x0/0x0/0x0, omap 0x345ad, meta 0x3d3ba53), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 127221760 unmapped: 39157760 heap: 166379520 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 127221760 unmapped: 39157760 heap: 166379520 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f0fdc7400 session 0x558f0ffc2380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1252e000 session 0x558f10854000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1263c000 session 0x558f12bac380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1424b800 session 0x558f0fb0dc00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1424b400 session 0x558f1074f500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f0fdc7400 session 0x558f12027180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1252e000 session 0x558f113e0e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1263c000 session 0x558f113e0c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1424b800 session 0x558f12213500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 127639552 unmapped: 42418176 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1897046 data_alloc: 218103808 data_used: 4700585
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.703428268s of 10.065398216s, submitted: 189
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 127639552 unmapped: 42418176 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1424a400 session 0x558f10776700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 heartbeat osd_stat(store_statfs(0x4f9533000/0x0/0x4ffc00000, data 0x280edcd/0x2959000, compress 0x0/0x0/0x0, omap 0x345ad, meta 0x3d3ba53), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f0fdc7400 session 0x558f107568c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 127639552 unmapped: 42418176 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1252e000 session 0x558f12baa540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1263c000 session 0x558f10854380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 ms_handle_reset con 0x558f1424b000 session 0x558f113b9880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 127557632 unmapped: 42500096 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 42491904 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 ms_handle_reset con 0x558f10b54400 session 0x558f12772380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 ms_handle_reset con 0x558f0fdc7400 session 0x558f12bad500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 42491904 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 ms_handle_reset con 0x558f1252e000 session 0x558f12212540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 ms_handle_reset con 0x558f1263c000 session 0x558f126cda40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1938806 data_alloc: 234881024 data_used: 10017193
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f952b000/0x0/0x4ffc00000, data 0x28108e2/0x295f000, compress 0x0/0x0/0x0, omap 0x34c83, meta 0x3d3b37d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 134389760 unmapped: 35667968 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 134389760 unmapped: 35667968 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f9503000/0x0/0x4ffc00000, data 0x283a8e2/0x2989000, compress 0x0/0x0/0x0, omap 0x34c83, meta 0x3d3b37d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 148037632 unmapped: 22020096 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 148037632 unmapped: 22020096 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f9503000/0x0/0x4ffc00000, data 0x283a8e2/0x2989000, compress 0x0/0x0/0x0, omap 0x34c83, meta 0x3d3b37d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 148070400 unmapped: 21987328 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2059898 data_alloc: 251658240 data_used: 30387625
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 148070400 unmapped: 21987328 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 148070400 unmapped: 21987328 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f9503000/0x0/0x4ffc00000, data 0x283a8e2/0x2989000, compress 0x0/0x0/0x0, omap 0x34c83, meta 0x3d3b37d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 148103168 unmapped: 21954560 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 148103168 unmapped: 21954560 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 148135936 unmapped: 21921792 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.617062569s of 14.648895264s, submitted: 25
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2093936 data_alloc: 251658240 data_used: 30432681
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 154017792 unmapped: 16039936 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 155590656 unmapped: 14467072 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f8c85000/0x0/0x4ffc00000, data 0x30a98e2/0x31f8000, compress 0x0/0x0/0x0, omap 0x34c83, meta 0x3d3b37d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f8c85000/0x0/0x4ffc00000, data 0x30a98e2/0x31f8000, compress 0x0/0x0/0x0, omap 0x34c83, meta 0x3d3b37d), peers [0,2] op hist [1])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 163758080 unmapped: 6299648 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162947072 unmapped: 7110656 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162963456 unmapped: 7094272 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2198380 data_alloc: 251658240 data_used: 33104297
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162963456 unmapped: 7094272 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162963456 unmapped: 7094272 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162963456 unmapped: 7094272 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f827f000/0x0/0x4ffc00000, data 0x3abe8e2/0x3c0d000, compress 0x0/0x0/0x0, omap 0x34c83, meta 0x3d3b37d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162963456 unmapped: 7094272 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162963456 unmapped: 7094272 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2198652 data_alloc: 251658240 data_used: 33112489
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162963456 unmapped: 7094272 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162963456 unmapped: 7094272 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.249907494s of 11.884238243s, submitted: 204
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 ms_handle_reset con 0x558f146ed400 session 0x558f0fdabc00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f827f000/0x0/0x4ffc00000, data 0x3abe8e2/0x3c0d000, compress 0x0/0x0/0x0, omap 0x34c83, meta 0x3d3b37d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 229 ms_handle_reset con 0x558f12195400 session 0x558f113b9c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 163364864 unmapped: 6692864 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 230 ms_handle_reset con 0x558f12869400 session 0x558f113b9dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 230 ms_handle_reset con 0x558f12195400 session 0x558f1273b880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 230 ms_handle_reset con 0x558f146ec400 session 0x558f0ffd7dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 163373056 unmapped: 6684672 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 230 ms_handle_reset con 0x558f1252e000 session 0x558f10757180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 163373056 unmapped: 6684672 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 230 ms_handle_reset con 0x558f146ed400 session 0x558f10757a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2212532 data_alloc: 251658240 data_used: 33120795
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 230 handle_osd_map epochs [230,231], i have 230, src has [1,231]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 231 ms_handle_reset con 0x558f12195400 session 0x558f12027880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 163389440 unmapped: 6668288 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 232 ms_handle_reset con 0x558f146ed400 session 0x558f12026000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 232 ms_handle_reset con 0x558f1252e000 session 0x558f107568c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 232 ms_handle_reset con 0x558f1263c000 session 0x558f124e36c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162349056 unmapped: 7708672 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 232 ms_handle_reset con 0x558f0fdc7400 session 0x558f0fdaac40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162349056 unmapped: 7708672 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 232 ms_handle_reset con 0x558f12195400 session 0x558f10776fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 232 heartbeat osd_stat(store_statfs(0x4f826c000/0x0/0x4ffc00000, data 0x3ac6202/0x3c1e000, compress 0x0/0x0/0x0, omap 0x35a67, meta 0x3d3a599), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 233 ms_handle_reset con 0x558f12869400 session 0x558f12d468c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 233 ms_handle_reset con 0x558f1252e000 session 0x558f10b3dc00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162390016 unmapped: 7667712 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 233 ms_handle_reset con 0x558f1424b000 session 0x558f0e417340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 233 ms_handle_reset con 0x558f146ecc00 session 0x558f10793340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 233 ms_handle_reset con 0x558f12195400 session 0x558f0ffc2a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 233 heartbeat osd_stat(store_statfs(0x4f8167000/0x0/0x4ffc00000, data 0x3bcc904/0x3d25000, compress 0x0/0x0/0x0, omap 0x35e66, meta 0x3d3a19a), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 152223744 unmapped: 17833984 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2010966 data_alloc: 234881024 data_used: 18516507
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 152223744 unmapped: 17833984 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 233 ms_handle_reset con 0x558f12869400 session 0x558f0fb0d180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 233 ms_handle_reset con 0x558f1424b000 session 0x558f0fb0cfc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 233 ms_handle_reset con 0x558f1263c000 session 0x558f1292d340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 152231936 unmapped: 17825792 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 234 ms_handle_reset con 0x558f146ec400 session 0x558f12d47a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 234 ms_handle_reset con 0x558f12195400 session 0x558f113b9180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.830022812s of 10.019705772s, submitted: 107
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 234 ms_handle_reset con 0x558f1263c000 session 0x558f10757500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 235 ms_handle_reset con 0x558f146ed400 session 0x558f10756c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 235 ms_handle_reset con 0x558f1252e000 session 0x558f10b3d340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151412736 unmapped: 18644992 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 235 ms_handle_reset con 0x558f1424b000 session 0x558f11e35dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 235 ms_handle_reset con 0x558f12195400 session 0x558f12cd0700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 236 ms_handle_reset con 0x558f1263c000 session 0x558f12d46fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151453696 unmapped: 18604032 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151461888 unmapped: 18595840 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 236 heartbeat osd_stat(store_statfs(0x4f97e5000/0x0/0x4ffc00000, data 0x2545c11/0x26a2000, compress 0x0/0x0/0x0, omap 0x36afb, meta 0x3d39505), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2030041 data_alloc: 234881024 data_used: 19573177
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151216128 unmapped: 18841600 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 237 ms_handle_reset con 0x558f1424b000 session 0x558f12cd1340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 237 ms_handle_reset con 0x558f146ed400 session 0x558f127b5340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151216128 unmapped: 18841600 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 237 ms_handle_reset con 0x558f146ed000 session 0x558f10b3ca80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151216128 unmapped: 18841600 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 238 ms_handle_reset con 0x558f1424b800 session 0x558f10854a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 238 ms_handle_reset con 0x558f1424a800 session 0x558f10854540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 139960320 unmapped: 30097408 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 238 ms_handle_reset con 0x558f12195400 session 0x558f113b88c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 139976704 unmapped: 30081024 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 238 heartbeat osd_stat(store_statfs(0x4fadc6000/0x0/0x4ffc00000, data 0xf68cb7/0x10c2000, compress 0x0/0x0/0x0, omap 0x375c0, meta 0x3d38a40), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1820840 data_alloc: 218103808 data_used: 5765933
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 139976704 unmapped: 30081024 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 139976704 unmapped: 30081024 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 139976704 unmapped: 30081024 heap: 170057728 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 238 heartbeat osd_stat(store_statfs(0x4fadc6000/0x0/0x4ffc00000, data 0xf68cb7/0x10c2000, compress 0x0/0x0/0x0, omap 0x375c0, meta 0x3d38a40), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.873140335s of 11.106499672s, submitted: 139
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142655488 unmapped: 31653888 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142655488 unmapped: 31653888 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1863684 data_alloc: 218103808 data_used: 5917450
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142467072 unmapped: 31842304 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142467072 unmapped: 31842304 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 241 heartbeat osd_stat(store_statfs(0x4f95d0000/0x0/0x4ffc00000, data 0x15bbed4/0x1718000, compress 0x0/0x0/0x0, omap 0x37cad, meta 0x4ed8353), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 242 ms_handle_reset con 0x558f1263c000 session 0x558f12026a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 242 ms_handle_reset con 0x558f1424b000 session 0x558f124e2a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142467072 unmapped: 31842304 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142467072 unmapped: 31842304 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142467072 unmapped: 31842304 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1868618 data_alloc: 218103808 data_used: 5917640
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142467072 unmapped: 31842304 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 242 ms_handle_reset con 0x558f1424b000 session 0x558f113b9340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142491648 unmapped: 31817728 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142491648 unmapped: 31817728 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f95d1000/0x0/0x4ffc00000, data 0x15bdac4/0x171b000, compress 0x0/0x0/0x0, omap 0x38106, meta 0x4ed7efa), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.819180489s of 10.049445152s, submitted: 95
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 242 handle_osd_map epochs [242,243], i have 243, src has [1,243]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 243 ms_handle_reset con 0x558f1263c000 session 0x558f12027180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142548992 unmapped: 31760384 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 244 ms_handle_reset con 0x558f1424b800 session 0x558f124e2000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142565376 unmapped: 31744000 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 245 ms_handle_reset con 0x558f146ed400 session 0x558f113b9a40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 245 ms_handle_reset con 0x558f146ec400 session 0x558f120261c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 245 ms_handle_reset con 0x558f12195400 session 0x558f1074f500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1884528 data_alloc: 218103808 data_used: 5917640
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142647296 unmapped: 31662080 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 245 handle_osd_map epochs [245,246], i have 245, src has [1,246]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 246 ms_handle_reset con 0x558f146ec400 session 0x558f12d47880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142196736 unmapped: 32112640 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 246 heartbeat osd_stat(store_statfs(0x4f95c2000/0x0/0x4ffc00000, data 0x15c48cd/0x1728000, compress 0x0/0x0/0x0, omap 0x395f2, meta 0x4ed6a0e), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 246 ms_handle_reset con 0x558f1424b000 session 0x558f0fa44380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 246 ms_handle_reset con 0x558f1424a800 session 0x558f130b6c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 247 ms_handle_reset con 0x558f1424b800 session 0x558f10776700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 247 ms_handle_reset con 0x558f12195400 session 0x558f113b8000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 141934592 unmapped: 32374784 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 248 ms_handle_reset con 0x558f1424a800 session 0x558f130cfc00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 248 ms_handle_reset con 0x558f1263c000 session 0x558f12213500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 141934592 unmapped: 32374784 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 248 heartbeat osd_stat(store_statfs(0x4f95b6000/0x0/0x4ffc00000, data 0x15c8101/0x1730000, compress 0x0/0x0/0x0, omap 0x39b19, meta 0x4ed64e7), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 249 ms_handle_reset con 0x558f1424b000 session 0x558f1242f500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 249 ms_handle_reset con 0x558f146ed400 session 0x558f1273bdc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 141934592 unmapped: 32374784 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1901829 data_alloc: 218103808 data_used: 5917836
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 250 ms_handle_reset con 0x558f1424a800 session 0x558f12cd1180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 250 ms_handle_reset con 0x558f12195400 session 0x558f0fb0d6c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 141942784 unmapped: 32366592 heap: 174309376 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 251 ms_handle_reset con 0x558f1263c000 session 0x558f0ffd76c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 251 ms_handle_reset con 0x558f10c02000 session 0x558f127b5c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 251 ms_handle_reset con 0x558f146ec400 session 0x558f10b3c700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 251 ms_handle_reset con 0x558f12864800 session 0x558f12d46000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 251 ms_handle_reset con 0x558f116ad400 session 0x558f124e3dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 155852800 unmapped: 34136064 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 251 ms_handle_reset con 0x558f10c02000 session 0x558f12cd0540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 251 ms_handle_reset con 0x558f12195400 session 0x558f1245bdc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 252 ms_handle_reset con 0x558f1424b000 session 0x558f12d46e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 252 ms_handle_reset con 0x558f1263c000 session 0x558f12bacfc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 252 ms_handle_reset con 0x558f10c02000 session 0x558f116b56c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 252 ms_handle_reset con 0x558f116ad400 session 0x558f12212540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 155590656 unmapped: 34398208 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 252 handle_osd_map epochs [252,253], i have 252, src has [1,253]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.297321320s of 10.061747551s, submitted: 230
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 253 ms_handle_reset con 0x558f1424a800 session 0x558f10b3d6c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 253 ms_handle_reset con 0x558f12195400 session 0x558f128b9340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156704768 unmapped: 33284096 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 253 ms_handle_reset con 0x558f12195400 session 0x558f12d47c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 253 ms_handle_reset con 0x558f10c02000 session 0x558f10854e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 253 ms_handle_reset con 0x558f116ad400 session 0x558f11e34700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 253 heartbeat osd_stat(store_statfs(0x4f8515000/0x0/0x4ffc00000, data 0x26661c1/0x27d3000, compress 0x0/0x0/0x0, omap 0x3aeec, meta 0x4ed5114), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156745728 unmapped: 33243136 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 253 ms_handle_reset con 0x558f1424a800 session 0x558f113b96c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 253 ms_handle_reset con 0x558f1263c000 session 0x558f0fdabc00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2041062 data_alloc: 234881024 data_used: 12400850
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 149807104 unmapped: 40181760 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 253 ms_handle_reset con 0x558f116ad400 session 0x558f124d4fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 254 ms_handle_reset con 0x558f12195400 session 0x558f113b9880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 40165376 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 254 heartbeat osd_stat(store_statfs(0x4f8514000/0x0/0x4ffc00000, data 0x2667db1/0x27d6000, compress 0x0/0x0/0x0, omap 0x3b483, meta 0x4ed4b7d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 255 ms_handle_reset con 0x558f10c02000 session 0x558f0fdabc00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 40165376 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 40624128 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 256 ms_handle_reset con 0x558f1424a800 session 0x558f113b96c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 256 ms_handle_reset con 0x558f146ec400 session 0x558f10757c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 256 ms_handle_reset con 0x558f12864800 session 0x558f10854e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 256 ms_handle_reset con 0x558f10c02000 session 0x558f128b9340
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 256 ms_handle_reset con 0x558f1424a800 session 0x558f12bacfc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 256 ms_handle_reset con 0x558f12193800 session 0x558f12d46000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 40624128 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 256 ms_handle_reset con 0x558f116ad400 session 0x558f12212540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 256 ms_handle_reset con 0x558f10c02000 session 0x558f11e34700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 256 ms_handle_reset con 0x558f116ad400 session 0x558f1074f500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 256 ms_handle_reset con 0x558f12193800 session 0x558f1273a700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 256 ms_handle_reset con 0x558f12864800 session 0x558f124e3180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 256 ms_handle_reset con 0x558f1424a800 session 0x558f10b3c700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2057182 data_alloc: 234881024 data_used: 12402118
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 257 ms_handle_reset con 0x558f1424a800 session 0x558f0ffd76c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 40632320 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 257 ms_handle_reset con 0x558f116ad400 session 0x558f113b8c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 258 heartbeat osd_stat(store_statfs(0x4f8506000/0x0/0x4ffc00000, data 0x266d100/0x27e2000, compress 0x0/0x0/0x0, omap 0x3c052, meta 0x4ed3fae), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 258 ms_handle_reset con 0x558f10c02000 session 0x558f113e0000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 40607744 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 258 ms_handle_reset con 0x558f12195400 session 0x558f124e2700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 258 ms_handle_reset con 0x558f12193800 session 0x558f10b3c1c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 40607744 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 258 ms_handle_reset con 0x558f10c02000 session 0x558f12d47500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.951356888s of 10.141759872s, submitted: 85
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 258 ms_handle_reset con 0x558f116ad400 session 0x558f10b3c540
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 258 ms_handle_reset con 0x558f12864800 session 0x558f0ffd6e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 258 ms_handle_reset con 0x558f1424a800 session 0x558f11e35880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 41172992 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 258 ms_handle_reset con 0x558f108b2c00 session 0x558f0fa448c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 258 ms_handle_reset con 0x558f108b2c00 session 0x558f12d461c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 258 ms_handle_reset con 0x558f10c02000 session 0x558f1074e700
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 259 ms_handle_reset con 0x558f12864800 session 0x558f126d1dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 259 ms_handle_reset con 0x558f116ad400 session 0x558f10777180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 148856832 unmapped: 41132032 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2098337 data_alloc: 234881024 data_used: 12402817
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 259 handle_osd_map epochs [259,260], i have 259, src has [1,260]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 260 ms_handle_reset con 0x558f14852400 session 0x558f113b8380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 260 ms_handle_reset con 0x558f10c02000 session 0x558f113b9180
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 260 ms_handle_reset con 0x558f12195400 session 0x558f12027500
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 149159936 unmapped: 40828928 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 260 ms_handle_reset con 0x558f108b2c00 session 0x558f120268c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 260 heartbeat osd_stat(store_statfs(0x4f81bd000/0x0/0x4ffc00000, data 0x29ae938/0x2b2a000, compress 0x0/0x0/0x0, omap 0x3ce01, meta 0x4ed31ff), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151044096 unmapped: 38944768 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 260 ms_handle_reset con 0x558f116ad400 session 0x558f1074ee00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 260 ms_handle_reset con 0x558f12864800 session 0x558f12026fc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 260 heartbeat osd_stat(store_statfs(0x4f81bd000/0x0/0x4ffc00000, data 0x29ae938/0x2b2a000, compress 0x0/0x0/0x0, omap 0x3ce01, meta 0x4ed31ff), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151044096 unmapped: 38944768 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 260 ms_handle_reset con 0x558f108b2c00 session 0x558f12bac000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 260 ms_handle_reset con 0x558f10c02000 session 0x558f126d0380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 261 ms_handle_reset con 0x558f12195400 session 0x558f12d46e00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151134208 unmapped: 38854656 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 261 heartbeat osd_stat(store_statfs(0x4f81bd000/0x0/0x4ffc00000, data 0x29ae938/0x2b2a000, compress 0x0/0x0/0x0, omap 0x3cf19, meta 0x4ed30e7), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 262 ms_handle_reset con 0x558f116ad400 session 0x558f1245c1c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151166976 unmapped: 38821888 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 262 ms_handle_reset con 0x558f14852400 session 0x558f12cd0380
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 262 ms_handle_reset con 0x558f12864800 session 0x558f124e3880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2152294 data_alloc: 234881024 data_used: 19537025
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 262 handle_osd_map epochs [262,263], i have 263, src has [1,263]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151216128 unmapped: 38772736 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 263 ms_handle_reset con 0x558f10c02000 session 0x558f12cd0000
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 263 ms_handle_reset con 0x558f108b2c00 session 0x558f12cd01c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 264 ms_handle_reset con 0x558f116ad400 session 0x558f125b9880
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 264 ms_handle_reset con 0x558f12195400 session 0x558f0fb0ca80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151265280 unmapped: 38723584 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 264 ms_handle_reset con 0x558f12195400 session 0x558f0fa441c0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 264 ms_handle_reset con 0x558f108b2c00 session 0x558f10855c00
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151248896 unmapped: 38739968 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.103104591s of 10.508928299s, submitted: 98
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 265 ms_handle_reset con 0x558f10c02000 session 0x558f0fb0cc40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 265 ms_handle_reset con 0x558f116ad400 session 0x558f124d4a80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 265 ms_handle_reset con 0x558f12864800 session 0x558f1273aa80
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151281664 unmapped: 38707200 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 265 heartbeat osd_stat(store_statfs(0x4f81b1000/0x0/0x4ffc00000, data 0x29b57fc/0x2b36000, compress 0x0/0x0/0x0, omap 0x3e263, meta 0x4ed1d9d), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151281664 unmapped: 38707200 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2160452 data_alloc: 234881024 data_used: 19538223
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151306240 unmapped: 38682624 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 266 ms_handle_reset con 0x558f108b2c00 session 0x558f124e3dc0
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156352512 unmapped: 33636352 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 267 ms_handle_reset con 0x558f10c02000 session 0x558f12026c40
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 267 heartbeat osd_stat(store_statfs(0x4f7cde000/0x0/0x4ffc00000, data 0x2e8ba98/0x300e000, compress 0x0/0x0/0x0, omap 0x3ebe9, meta 0x4ed1417), peers [0,2] op hist [])
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156475392 unmapped: 33513472 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Feb  2 07:15:01 np0005604943 ceph-osd[87192]: osd.1 268 ms_handle_reset con 0x558f12195400 session 0x558f113b8a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156483584 unmapped: 33505280 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 268 heartbeat osd_stat(store_statfs(0x4f7cd4000/0x0/0x4ffc00000, data 0x2e8f123/0x3014000, compress 0x0/0x0/0x0, omap 0x3f0fc, meta 0x4ed0f04), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 36667392 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 269 ms_handle_reset con 0x558f14852800 session 0x558f11e34540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 269 heartbeat osd_stat(store_statfs(0x4f7cd1000/0x0/0x4ffc00000, data 0x2e90de1/0x3019000, compress 0x0/0x0/0x0, omap 0x3f527, meta 0x4ed0ad9), peers [0,2] op hist [1])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 269 ms_handle_reset con 0x558f14852c00 session 0x558f0ffd7dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2219253 data_alloc: 234881024 data_used: 22651357
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 153362432 unmapped: 36626432 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 36601856 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 272 ms_handle_reset con 0x558f116ad400 session 0x558f12cd1180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 272 ms_handle_reset con 0x558f108b2c00 session 0x558f1273bdc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 36519936 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 153468928 unmapped: 36519936 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.905306816s of 10.215412140s, submitted: 166
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 274 ms_handle_reset con 0x558f10c02000 session 0x558f113b9500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 274 ms_handle_reset con 0x558f12195400 session 0x558f108c3a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 153509888 unmapped: 36478976 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 274 heartbeat osd_stat(store_statfs(0x4f7cc1000/0x0/0x4ffc00000, data 0x2e997ac/0x3025000, compress 0x0/0x0/0x0, omap 0x40237, meta 0x4ecfdc9), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2230412 data_alloc: 234881024 data_used: 23053378
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 153559040 unmapped: 36429824 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 275 ms_handle_reset con 0x558f14852800 session 0x558f0fa45880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 275 ms_handle_reset con 0x558f108b2c00 session 0x558f12bad180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 153616384 unmapped: 36372480 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 275 ms_handle_reset con 0x558f10c02000 session 0x558f113b8000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 275 ms_handle_reset con 0x558f116ad400 session 0x558f0ffd6fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 153616384 unmapped: 36372480 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 276 ms_handle_reset con 0x558f12195400 session 0x558f12d46700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 153640960 unmapped: 36347904 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 153640960 unmapped: 36347904 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 276 heartbeat osd_stat(store_statfs(0x4f7cbf000/0x0/0x4ffc00000, data 0x2e9ce87/0x302b000, compress 0x0/0x0/0x0, omap 0x41700, meta 0x4ece900), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 276 heartbeat osd_stat(store_statfs(0x4f7cbf000/0x0/0x4ffc00000, data 0x2e9ce87/0x302b000, compress 0x0/0x0/0x0, omap 0x41700, meta 0x4ece900), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2235861 data_alloc: 234881024 data_used: 23053378
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 276 ms_handle_reset con 0x558f14853000 session 0x558f12027dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 276 ms_handle_reset con 0x558f14853000 session 0x558f10757dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 153640960 unmapped: 36347904 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 276 ms_handle_reset con 0x558f108b2c00 session 0x558f113e1a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 276 ms_handle_reset con 0x558f10c02000 session 0x558f0ffd7dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 30851072 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 276 ms_handle_reset con 0x558f116ad400 session 0x558f113b8a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 276 ms_handle_reset con 0x558f12195400 session 0x558f12027c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 30851072 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 277 ms_handle_reset con 0x558f108b2c00 session 0x558f0fb0ca80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 277 ms_handle_reset con 0x558f10c02000 session 0x558f12cd01c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 30769152 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 277 ms_handle_reset con 0x558f14853000 session 0x558f0fa44380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.839365005s of 10.054286003s, submitted: 119
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 278 ms_handle_reset con 0x558f14853400 session 0x558f0fb0ce00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156901376 unmapped: 33087488 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 278 ms_handle_reset con 0x558f14853800 session 0x558f125b9880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 279 ms_handle_reset con 0x558f108b2c00 session 0x558f1273aa80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 279 ms_handle_reset con 0x558f116ad400 session 0x558f12cd0000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2274420 data_alloc: 234881024 data_used: 24786290
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156909568 unmapped: 33079296 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 280 ms_handle_reset con 0x558f10c02000 session 0x558f1273b880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 280 heartbeat osd_stat(store_statfs(0x4f7a84000/0x0/0x4ffc00000, data 0x30ce5db/0x3264000, compress 0x0/0x0/0x0, omap 0x42664, meta 0x4ecd99c), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 280 ms_handle_reset con 0x558f14853000 session 0x558f12d9c700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156942336 unmapped: 33046528 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 280 ms_handle_reset con 0x558f14853400 session 0x558f12d47880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 280 ms_handle_reset con 0x558f14853400 session 0x558f130cefc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 280 ms_handle_reset con 0x558f108b2c00 session 0x558f12cd08c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 280 ms_handle_reset con 0x558f10c02000 session 0x558f10854540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 280 ms_handle_reset con 0x558f116ad400 session 0x558f12cd0700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156975104 unmapped: 33013760 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 281 ms_handle_reset con 0x558f14853000 session 0x558f12d46700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 281 ms_handle_reset con 0x558f14853000 session 0x558f113b8000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156852224 unmapped: 33136640 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 281 ms_handle_reset con 0x558f108b2c00 session 0x558f124d4a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 281 ms_handle_reset con 0x558f116ad400 session 0x558f11e34540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 281 ms_handle_reset con 0x558f10c02000 session 0x558f120268c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f14853400 session 0x558f113b9500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 heartbeat osd_stat(store_statfs(0x4f7a78000/0x0/0x4ffc00000, data 0x30d35ae/0x3270000, compress 0x0/0x0/0x0, omap 0x430bd, meta 0x4eccf43), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157917184 unmapped: 32071680 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f108b2c00 session 0x558f1074ee00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f10c02000 session 0x558f12213a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f116ad400 session 0x558f12bac700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2287627 data_alloc: 234881024 data_used: 24786875
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f14853000 session 0x558f113b9340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157925376 unmapped: 32063488 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157925376 unmapped: 32063488 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f14853400 session 0x558f108c2700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f14853400 session 0x558f0ffd6a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157941760 unmapped: 32047104 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f10c02000 session 0x558f0fb0d180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f116ad400 session 0x558f12026380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157949952 unmapped: 32038912 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.836668968s of 10.116587639s, submitted: 112
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f14853000 session 0x558f12d46700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f14853c00 session 0x558f10854540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f10c02000 session 0x558f0fa44380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 158400512 unmapped: 31588352 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f108b2c00 session 0x558f10757500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 heartbeat osd_stat(store_statfs(0x4f7a81000/0x0/0x4ffc00000, data 0x30d3468/0x326b000, compress 0x0/0x0/0x0, omap 0x43383, meta 0x4eccc7d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2296145 data_alloc: 234881024 data_used: 26249147
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 158408704 unmapped: 31580160 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 ms_handle_reset con 0x558f14853000 session 0x558f126d0c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 283 ms_handle_reset con 0x558f116ad400 session 0x558f10757dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 283 ms_handle_reset con 0x558f14853400 session 0x558f12213500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 283 ms_handle_reset con 0x558f1424a800 session 0x558f0fdaae00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 283 ms_handle_reset con 0x558f14852000 session 0x558f0fa44fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 158498816 unmapped: 31490048 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 283 ms_handle_reset con 0x558f14853400 session 0x558f0fdaa1c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 283 ms_handle_reset con 0x558f108b2c00 session 0x558f12026e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157507584 unmapped: 32481280 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157564928 unmapped: 32423936 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157605888 unmapped: 32382976 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 284 ms_handle_reset con 0x558f10c02000 session 0x558f10777340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 284 ms_handle_reset con 0x558f108b2c00 session 0x558f12cd0700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 284 ms_handle_reset con 0x558f1424a800 session 0x558f12d46fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 284 heartbeat osd_stat(store_statfs(0x4f817c000/0x0/0x4ffc00000, data 0x29d7ad5/0x2b70000, compress 0x0/0x0/0x0, omap 0x4450d, meta 0x4ecbaf3), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 284 ms_handle_reset con 0x558f14852000 session 0x558f10776e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2261378 data_alloc: 234881024 data_used: 25569081
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 284 ms_handle_reset con 0x558f14853400 session 0x558f107576c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 284 ms_handle_reset con 0x558f116ad400 session 0x558f0ffd7dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 158654464 unmapped: 31334400 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 284 ms_handle_reset con 0x558f108b2c00 session 0x558f0ffc28c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 285 ms_handle_reset con 0x558f14852000 session 0x558f13099a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 285 ms_handle_reset con 0x558f1424a800 session 0x558f12cd01c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 285 ms_handle_reset con 0x558f14853400 session 0x558f10757880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 151994368 unmapped: 37994496 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 285 ms_handle_reset con 0x558f14853000 session 0x558f108541c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 285 ms_handle_reset con 0x558f1424a800 session 0x558f1074ee00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 285 heartbeat osd_stat(store_statfs(0x4f8178000/0x0/0x4ffc00000, data 0x29d7bb9/0x2b74000, compress 0x0/0x0/0x0, omap 0x44a66, meta 0x4ecb59a), peers [0,2] op hist [0,0,0,0,0,1])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 285 ms_handle_reset con 0x558f14852000 session 0x558f1245c1c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 286 ms_handle_reset con 0x558f108b2c00 session 0x558f12bac700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 286 heartbeat osd_stat(store_statfs(0x4f920c000/0x0/0x4ffc00000, data 0x1944753/0x1ae0000, compress 0x0/0x0/0x0, omap 0x45242, meta 0x4ecadbe), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 286 ms_handle_reset con 0x558f14853400 session 0x558f1245da40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 286 ms_handle_reset con 0x558f14bee000 session 0x558f0ffd7340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 152051712 unmapped: 37937152 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f14bee000 session 0x558f0ffc2e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f108b2c00 session 0x558f1273b340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 152051712 unmapped: 37937152 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f1424a800 session 0x558f126d1340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.646669388s of 10.140687943s, submitted: 310
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f1252e000 session 0x558f113b8700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f12869400 session 0x558f113e1880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f14852000 session 0x558f0fb0d880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f108b2c00 session 0x558f126d0a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f1252e000 session 0x558f12bac000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145752064 unmapped: 44236800 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f1424a800 session 0x558f0fdab6c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f14bee000 session 0x558f124e2fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f108b2c00 session 0x558f124e3880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2037708 data_alloc: 218103808 data_used: 4731041
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 44212224 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f1252e000 session 0x558f1242f500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 44212224 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f1424a800 session 0x558f0ffd6a80
Feb  2 07:15:02 np0005604943 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 heartbeat osd_stat(store_statfs(0x4f9961000/0x0/0x4ffc00000, data 0x11f4cb0/0x138b000, compress 0x0/0x0/0x0, omap 0x45c6d, meta 0x4eca393), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 44212224 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 ms_handle_reset con 0x558f14852000 session 0x558f1273ba40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 288 ms_handle_reset con 0x558f14853400 session 0x558f107761c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145784832 unmapped: 44204032 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 288 ms_handle_reset con 0x558f108b2c00 session 0x558f124d4a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145768448 unmapped: 44220416 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 289 ms_handle_reset con 0x558f1252e000 session 0x558f10854000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2045140 data_alloc: 218103808 data_used: 4731041
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 44195840 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 289 heartbeat osd_stat(store_statfs(0x4f9956000/0x0/0x4ffc00000, data 0x11f8303/0x1391000, compress 0x0/0x0/0x0, omap 0x463a9, meta 0x4ec9c57), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 289 ms_handle_reset con 0x558f1424a800 session 0x558f10b3d340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145727488 unmapped: 44261376 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 290 ms_handle_reset con 0x558f14852000 session 0x558f1273a380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145727488 unmapped: 44261376 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 291 ms_handle_reset con 0x558f14bee800 session 0x558f12cd1340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 291 ms_handle_reset con 0x558f14bee400 session 0x558f1074f180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145727488 unmapped: 44261376 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145727488 unmapped: 44261376 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.510087967s of 10.903784752s, submitted: 105
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 291 ms_handle_reset con 0x558f108b2c00 session 0x558f10776700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 291 ms_handle_reset con 0x558f1252e000 session 0x558f126d0700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2054557 data_alloc: 218103808 data_used: 4736892
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 44253184 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 291 heartbeat osd_stat(store_statfs(0x4f9953000/0x0/0x4ffc00000, data 0x11fbaff/0x1399000, compress 0x0/0x0/0x0, omap 0x46bc5, meta 0x4ec943b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 291 ms_handle_reset con 0x558f1424a800 session 0x558f113b8000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 292 ms_handle_reset con 0x558f14852000 session 0x558f12cd16c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145752064 unmapped: 44236800 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145752064 unmapped: 44236800 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 292 ms_handle_reset con 0x558f108b2c00 session 0x558f10b3ca80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 292 heartbeat osd_stat(store_statfs(0x4f9951000/0x0/0x4ffc00000, data 0x11fd68d/0x139b000, compress 0x0/0x0/0x0, omap 0x46f11, meta 0x4ec90ef), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 292 handle_osd_map epochs [293,293], i have 293, src has [1,293]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 293 ms_handle_reset con 0x558f1252e000 session 0x558f130b6c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 293 ms_handle_reset con 0x558f1424a800 session 0x558f130cfc00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 44253184 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 293 ms_handle_reset con 0x558f14bee400 session 0x558f1273a8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 293 heartbeat osd_stat(store_statfs(0x4f994c000/0x0/0x4ffc00000, data 0x11ff28d/0x139e000, compress 0x0/0x0/0x0, omap 0x4738d, meta 0x4ec8c73), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145743872 unmapped: 44244992 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 293 ms_handle_reset con 0x558f14bef000 session 0x558f12bad500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2060690 data_alloc: 218103808 data_used: 4738118
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 293 ms_handle_reset con 0x558f14bef000 session 0x558f12d47a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145768448 unmapped: 44220416 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 294 ms_handle_reset con 0x558f14beec00 session 0x558f12650fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 294 ms_handle_reset con 0x558f1252e000 session 0x558f1273b180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 294 handle_osd_map epochs [295,295], i have 295, src has [1,295]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 44212224 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 295 ms_handle_reset con 0x558f108b2c00 session 0x558f0ffd68c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 295 heartbeat osd_stat(store_statfs(0x4f9945000/0x0/0x4ffc00000, data 0x1202a2b/0x13a3000, compress 0x0/0x0/0x0, omap 0x47d83, meta 0x4ec827d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 295 ms_handle_reset con 0x558f1424a800 session 0x558f113b88c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 44212224 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 296 ms_handle_reset con 0x558f108b2c00 session 0x558f108556c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 296 ms_handle_reset con 0x558f1252e000 session 0x558f1273ae00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145801216 unmapped: 44187648 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 296 ms_handle_reset con 0x558f14beec00 session 0x558f12027180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145801216 unmapped: 44187648 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.012571335s of 10.072498322s, submitted: 119
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 297 ms_handle_reset con 0x558f14bef000 session 0x558f12bac000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2071888 data_alloc: 218103808 data_used: 4739246
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145801216 unmapped: 44187648 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 297 ms_handle_reset con 0x558f14bee400 session 0x558f12d46380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 297 ms_handle_reset con 0x558f108b2c00 session 0x558f0fa45340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145817600 unmapped: 44171264 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145817600 unmapped: 44171264 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 298 heartbeat osd_stat(store_statfs(0x4f9944000/0x0/0x4ffc00000, data 0x1206074/0x13a8000, compress 0x0/0x0/0x0, omap 0x488f8, meta 0x4ec7708), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145817600 unmapped: 44171264 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 299 ms_handle_reset con 0x558f1252e000 session 0x558f12bad340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 299 heartbeat osd_stat(store_statfs(0x4f993f000/0x0/0x4ffc00000, data 0x1207b2f/0x13ab000, compress 0x0/0x0/0x0, omap 0x48e2f, meta 0x4ec71d1), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145817600 unmapped: 44171264 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2076840 data_alloc: 218103808 data_used: 4739246
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 299 ms_handle_reset con 0x558f14bee400 session 0x558f108c28c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145817600 unmapped: 44171264 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 299 ms_handle_reset con 0x558f14beec00 session 0x558f108c2700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 299 ms_handle_reset con 0x558f14bef400 session 0x558f113b9500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 299 ms_handle_reset con 0x558f14bef000 session 0x558f12cd0380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145817600 unmapped: 44171264 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 299 ms_handle_reset con 0x558f14bef400 session 0x558f10756c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 299 ms_handle_reset con 0x558f108b2c00 session 0x558f0fb0dc00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 299 ms_handle_reset con 0x558f1252e000 session 0x558f0ffc2540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 299 ms_handle_reset con 0x558f14bee400 session 0x558f12cd1dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 299 heartbeat osd_stat(store_statfs(0x4f993c000/0x0/0x4ffc00000, data 0x12096e7/0x13ae000, compress 0x0/0x0/0x0, omap 0x49411, meta 0x4ec6bef), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145825792 unmapped: 44163072 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 300 ms_handle_reset con 0x558f1252e000 session 0x558f12650fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145825792 unmapped: 44163072 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 301 ms_handle_reset con 0x558f14bef400 session 0x558f1245d340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 301 ms_handle_reset con 0x558f14bef000 session 0x558f1273b340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 301 ms_handle_reset con 0x558f108b2c00 session 0x558f12d47a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145833984 unmapped: 44154880 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2087393 data_alloc: 218103808 data_used: 4739246
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 301 ms_handle_reset con 0x558f14beec00 session 0x558f120268c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.611740112s of 10.786938667s, submitted: 83
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 301 ms_handle_reset con 0x558f108b2c00 session 0x558f1292d500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145833984 unmapped: 44154880 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 301 heartbeat osd_stat(store_statfs(0x4f9936000/0x0/0x4ffc00000, data 0x120cf1d/0x13b6000, compress 0x0/0x0/0x0, omap 0x49b84, meta 0x4ec647c), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 302 ms_handle_reset con 0x558f1252e000 session 0x558f0ffc28c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145833984 unmapped: 44154880 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 302 ms_handle_reset con 0x558f14beec00 session 0x558f113b8700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 145842176 unmapped: 44146688 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 302 ms_handle_reset con 0x558f14bef000 session 0x558f108c2540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 302 heartbeat osd_stat(store_statfs(0x4f9933000/0x0/0x4ffc00000, data 0x120eab7/0x13b7000, compress 0x0/0x0/0x0, omap 0x49d2b, meta 0x4ec62d5), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 17K writes, 66K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 17K writes, 5894 syncs, 3.01 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 24.34 MB, 0.04 MB/s#012Interval WAL: 10K writes, 4497 syncs, 2.36 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 302 ms_handle_reset con 0x558f14bef400 session 0x558f0fb0d880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 302 ms_handle_reset con 0x558f108b2c00 session 0x558f124e2c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 302 ms_handle_reset con 0x558f1252e000 session 0x558f108541c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 302 ms_handle_reset con 0x558f14beec00 session 0x558f0ffd7dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 146382848 unmapped: 43606016 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 302 ms_handle_reset con 0x558f14bef000 session 0x558f1245c1c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 302 ms_handle_reset con 0x558f14bef800 session 0x558f0ffd68c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 302 ms_handle_reset con 0x558f108b2c00 session 0x558f0fdab6c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 146440192 unmapped: 43548672 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2141051 data_alloc: 218103808 data_used: 4739859
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 303 ms_handle_reset con 0x558f14beec00 session 0x558f130cfc00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 143597568 unmapped: 46391296 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 303 ms_handle_reset con 0x558f12194800 session 0x558f12cd0700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 303 ms_handle_reset con 0x558f14bef000 session 0x558f0fa44fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 303 ms_handle_reset con 0x558f12194400 session 0x558f0fa44380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 303 ms_handle_reset con 0x558f12194400 session 0x558f0fb0c540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 303 ms_handle_reset con 0x558f173aec00 session 0x558f10777340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 304 ms_handle_reset con 0x558f14398400 session 0x558f126d0e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 304 ms_handle_reset con 0x558f1252e000 session 0x558f126d0380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 141877248 unmapped: 48111616 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 304 ms_handle_reset con 0x558f173ae000 session 0x558f10792a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 304 ms_handle_reset con 0x558f12194400 session 0x558f113b88c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: mgrc ms_handle_reset ms_handle_reset con 0x558f0fd32000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1000647904
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1000647904,v1:192.168.122.100:6801/1000647904]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: mgrc handle_mgr_configure stats_period=5
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 304 ms_handle_reset con 0x558f1252e000 session 0x558f10855c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142106624 unmapped: 47882240 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 304 heartbeat osd_stat(store_statfs(0x4f8b39000/0x0/0x4ffc00000, data 0x20003b5/0x21b1000, compress 0x0/0x0/0x0, omap 0x4abd9, meta 0x4ec5427), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 305 ms_handle_reset con 0x558f14398400 session 0x558f124e3880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 305 ms_handle_reset con 0x558f146ec000 session 0x558f0fdaae00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 305 ms_handle_reset con 0x558f173aec00 session 0x558f128b8700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 141475840 unmapped: 48513024 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 305 ms_handle_reset con 0x558f146ec000 session 0x558f130cefc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 305 ms_handle_reset con 0x558f14398400 session 0x558f126cda40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 305 ms_handle_reset con 0x558f0f492c00 session 0x558f10792700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 306 ms_handle_reset con 0x558f1252e000 session 0x558f12bac700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 306 ms_handle_reset con 0x558f146ec400 session 0x558f0ffd6fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 48971776 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 306 ms_handle_reset con 0x558f0fca0000 session 0x558f12772540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 306 ms_handle_reset con 0x558f0fd33000 session 0x558f1242e8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 306 ms_handle_reset con 0x558f12194400 session 0x558f10854380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 306 ms_handle_reset con 0x558f1261d000 session 0x558f1245ddc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2281902 data_alloc: 218103808 data_used: 4740842
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.703756332s of 10.155076981s, submitted: 205
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 307 ms_handle_reset con 0x558f0fca1400 session 0x558f125b9500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 48947200 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 307 ms_handle_reset con 0x558f14852c00 session 0x558f113b9dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 307 ms_handle_reset con 0x558f0f492c00 session 0x558f125b8fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 307 heartbeat osd_stat(store_statfs(0x4f7fff000/0x0/0x4ffc00000, data 0x2b37a08/0x2ceb000, compress 0x0/0x0/0x0, omap 0x4b7a4, meta 0x4ec485c), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 141377536 unmapped: 48611328 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 308 ms_handle_reset con 0x558f14852800 session 0x558f0fb0d180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 308 ms_handle_reset con 0x558f0f492c00 session 0x558f124d4a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 308 ms_handle_reset con 0x558f146ec400 session 0x558f1245dc00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 308 ms_handle_reset con 0x558f0fca1400 session 0x558f12cd08c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 308 ms_handle_reset con 0x558f12194400 session 0x558f1245c8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 141377536 unmapped: 48611328 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 308 ms_handle_reset con 0x558f0f492c00 session 0x558f10b3d180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 308 ms_handle_reset con 0x558f0fca1400 session 0x558f12d46c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 141377536 unmapped: 48611328 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 308 ms_handle_reset con 0x558f14852800 session 0x558f10792e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 308 handle_osd_map epochs [308,309], i have 309, src has [1,309]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142434304 unmapped: 47554560 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 309 ms_handle_reset con 0x558f14852c00 session 0x558f1245c700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 309 heartbeat osd_stat(store_statfs(0x4f7ff9000/0x0/0x4ffc00000, data 0x2b3b6e9/0x2cf3000, compress 0x0/0x0/0x0, omap 0x4c147, meta 0x4ec3eb9), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2294480 data_alloc: 218103808 data_used: 4741525
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142450688 unmapped: 47538176 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 310 ms_handle_reset con 0x558f1261d000 session 0x558f127b5340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 310 ms_handle_reset con 0x558f146ec400 session 0x558f0ffd68c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142467072 unmapped: 47521792 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142467072 unmapped: 47521792 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 311 ms_handle_reset con 0x558f0fca1400 session 0x558f11e34fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 311 ms_handle_reset con 0x558f0f492c00 session 0x558f113b88c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142499840 unmapped: 47489024 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142499840 unmapped: 47489024 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 311 ms_handle_reset con 0x558f14852c00 session 0x558f12cd0000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 312 heartbeat osd_stat(store_statfs(0x4f7fee000/0x0/0x4ffc00000, data 0x2b40a11/0x2cfc000, compress 0x0/0x0/0x0, omap 0x4cdf3, meta 0x4ec320d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 312 ms_handle_reset con 0x558f14853800 session 0x558f126d0700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2305593 data_alloc: 218103808 data_used: 4741427
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142499840 unmapped: 47489024 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.453857422s of 10.606096268s, submitted: 87
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 313 ms_handle_reset con 0x558f0f492c00 session 0x558f116b5340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 313 heartbeat osd_stat(store_statfs(0x4f7fe6000/0x0/0x4ffc00000, data 0x2b44245/0x2d04000, compress 0x0/0x0/0x0, omap 0x4d01b, meta 0x4ec2fe5), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 313 ms_handle_reset con 0x558f0fca1400 session 0x558f10854a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 313 ms_handle_reset con 0x558f14852800 session 0x558f124d48c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142548992 unmapped: 47439872 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 313 ms_handle_reset con 0x558f146ec400 session 0x558f113b8000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142843904 unmapped: 47144960 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 314 ms_handle_reset con 0x558f14853c00 session 0x558f130b6c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 314 ms_handle_reset con 0x558f14852c00 session 0x558f0fa44fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 314 ms_handle_reset con 0x558f14853c00 session 0x558f0fa45dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 314 ms_handle_reset con 0x558f0f492c00 session 0x558f124e3dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142852096 unmapped: 47136768 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 314 ms_handle_reset con 0x558f0fca1400 session 0x558f0ffc21c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142852096 unmapped: 47136768 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2313727 data_alloc: 218103808 data_used: 4745539
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 142852096 unmapped: 47136768 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 314 ms_handle_reset con 0x558f146ec400 session 0x558f124d4380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 314 ms_handle_reset con 0x558f0f492c00 session 0x558f125b8540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 314 heartbeat osd_stat(store_statfs(0x4f7fe6000/0x0/0x4ffc00000, data 0x2b45f3d/0x2d06000, compress 0x0/0x0/0x0, omap 0x4d616, meta 0x4ec29ea), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 149700608 unmapped: 40288256 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 316 ms_handle_reset con 0x558f0fca1400 session 0x558f108c3c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f7fe5000/0x0/0x4ffc00000, data 0x2b45f4d/0x2d07000, compress 0x0/0x0/0x0, omap 0x4d616, meta 0x4ec29ea), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 149733376 unmapped: 40255488 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 316 ms_handle_reset con 0x558f14852800 session 0x558f124e2a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f7fda000/0x0/0x4ffc00000, data 0x2b4963a/0x2d0e000, compress 0x0/0x0/0x0, omap 0x4dbc6, meta 0x4ec243a), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 317 ms_handle_reset con 0x558f14852c00 session 0x558f1273a8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 149913600 unmapped: 40075264 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 317 ms_handle_reset con 0x558f146ec400 session 0x558f12cd1500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 317 ms_handle_reset con 0x558f13d28c00 session 0x558f13099a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 317 heartbeat osd_stat(store_statfs(0x4f7fd3000/0x0/0x4ffc00000, data 0x2b4b427/0x2d13000, compress 0x0/0x0/0x0, omap 0x4dcdc, meta 0x4ec2324), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 318 ms_handle_reset con 0x558f0f492c00 session 0x558f1242e8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 318 ms_handle_reset con 0x558f0fca1400 session 0x558f12bac700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 318 ms_handle_reset con 0x558f14853c00 session 0x558f12cd01c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 149970944 unmapped: 40017920 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 318 ms_handle_reset con 0x558f14852800 session 0x558f12cd1340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 318 ms_handle_reset con 0x558f14852800 session 0x558f126d0380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2406712 data_alloc: 234881024 data_used: 14559669
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 150011904 unmapped: 39976960 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.167462349s of 10.326120377s, submitted: 82
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 154247168 unmapped: 35741696 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 319 ms_handle_reset con 0x558f13d28c00 session 0x558f113e0380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 154329088 unmapped: 35659776 heap: 189988864 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 320 ms_handle_reset con 0x558f14853c00 session 0x558f1292d6c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 150241280 unmapped: 43950080 heap: 194191360 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 320 ms_handle_reset con 0x558f14852c00 session 0x558f12650c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 150372352 unmapped: 43819008 heap: 194191360 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 320 heartbeat osd_stat(store_statfs(0x4eabcf000/0x0/0x4ffc00000, data 0xff50b79/0x1011d000, compress 0x0/0x0/0x0, omap 0x4e895, meta 0x4ec176b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3655274 data_alloc: 234881024 data_used: 14571859
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 154722304 unmapped: 39469056 heap: 194191360 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 169009152 unmapped: 29384704 heap: 198393856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 320 handle_osd_map epochs [320,321], i have 321, src has [1,321]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 321 ms_handle_reset con 0x558f0fca1400 session 0x558f12027340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 321 ms_handle_reset con 0x558f12671400 session 0x558f12d46e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 321 ms_handle_reset con 0x558f0fca1400 session 0x558f0ffc2380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 321 ms_handle_reset con 0x558f0f492c00 session 0x558f126d0e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 322 ms_handle_reset con 0x558f13d28c00 session 0x558f124e61c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 322 ms_handle_reset con 0x558f10028800 session 0x558f124d48c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157220864 unmapped: 41172992 heap: 198393856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156614656 unmapped: 41779200 heap: 198393856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 322 ms_handle_reset con 0x558f10028800 session 0x558f10855c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 323 ms_handle_reset con 0x558f0f492c00 session 0x558f130cefc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156614656 unmapped: 41779200 heap: 198393856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 324 ms_handle_reset con 0x558f0fca1400 session 0x558f126cda40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 324 ms_handle_reset con 0x558f12671400 session 0x558f12cd1500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 324 heartbeat osd_stat(store_statfs(0x4de62a000/0x0/0x4ffc00000, data 0x1babb7f8/0x1b522000, compress 0x0/0x0/0x0, omap 0x4ef73, meta 0x606108d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 324 ms_handle_reset con 0x558f13d28c00 session 0x558f10854000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4615481 data_alloc: 234881024 data_used: 16100252
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 324 ms_handle_reset con 0x558f13d28c00 session 0x558f0ffd6a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156647424 unmapped: 41746432 heap: 198393856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 324 heartbeat osd_stat(store_statfs(0x4de622000/0x0/0x4ffc00000, data 0x1babeaa1/0x1b526000, compress 0x0/0x0/0x0, omap 0x4f606, meta 0x60609fa), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156647424 unmapped: 41746432 heap: 198393856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.073735237s of 10.368062019s, submitted: 325
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 324 ms_handle_reset con 0x558f0fca1400 session 0x558f11e34540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 324 ms_handle_reset con 0x558f12671400 session 0x558f113e0380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 324 ms_handle_reset con 0x558f14852800 session 0x558f0fdaae00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156672000 unmapped: 41721856 heap: 198393856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 325 ms_handle_reset con 0x558f14852c00 session 0x558f12026000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 325 ms_handle_reset con 0x558f0fca1400 session 0x558f0ffd68c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 325 ms_handle_reset con 0x558f12671400 session 0x558f113e0540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 325 ms_handle_reset con 0x558f14852800 session 0x558f12026c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 325 ms_handle_reset con 0x558f10028800 session 0x558f1292d180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 325 heartbeat osd_stat(store_statfs(0x4de620000/0x0/0x4ffc00000, data 0x1bac06d7/0x1b52a000, compress 0x0/0x0/0x0, omap 0x4faec, meta 0x6060514), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156737536 unmapped: 41656320 heap: 198393856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 326 ms_handle_reset con 0x558f14852c00 session 0x558f1074f180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 326 ms_handle_reset con 0x558f13d28c00 session 0x558f113b88c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 326 ms_handle_reset con 0x558f0f492c00 session 0x558f10757880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 326 ms_handle_reset con 0x558f0fca1400 session 0x558f125b9880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156753920 unmapped: 41639936 heap: 198393856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 326 ms_handle_reset con 0x558f14853000 session 0x558f12d9c700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 326 ms_handle_reset con 0x558f0fa40000 session 0x558f12cd0e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 326 ms_handle_reset con 0x558f0f492c00 session 0x558f113e16c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 326 ms_handle_reset con 0x558f13d28c00 session 0x558f12026380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4622834 data_alloc: 234881024 data_used: 16098042
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156835840 unmapped: 41558016 heap: 198393856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 327 ms_handle_reset con 0x558f14853000 session 0x558f12772540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 327 ms_handle_reset con 0x558f12671400 session 0x558f12d46c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 158949376 unmapped: 39444480 heap: 198393856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 328 ms_handle_reset con 0x558f10028800 session 0x558f12d46700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 328 ms_handle_reset con 0x558f0fca1400 session 0x558f1242e700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 328 ms_handle_reset con 0x558f14852800 session 0x558f1245c8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 159039488 unmapped: 39354368 heap: 198393856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 330 heartbeat osd_stat(store_statfs(0x4de615000/0x0/0x4ffc00000, data 0x1bac73ae/0x1b533000, compress 0x0/0x0/0x0, omap 0x51a61, meta 0x605e59f), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 159088640 unmapped: 39305216 heap: 198393856 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 330 ms_handle_reset con 0x558f0f492c00 session 0x558f1292cfc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 330 ms_handle_reset con 0x558f0fdc6c00 session 0x558f0fdaa000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 330 ms_handle_reset con 0x558f13d28c00 session 0x558f12cd01c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 330 ms_handle_reset con 0x558f14853000 session 0x558f12cd08c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 159383552 unmapped: 47415296 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 330 handle_osd_map epochs [330,331], i have 330, src has [1,331]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 330 handle_osd_map epochs [331,331], i have 331, src has [1,331]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 331 ms_handle_reset con 0x558f0fca1400 session 0x558f1245ddc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 331 ms_handle_reset con 0x558f14853000 session 0x558f12651180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5004417 data_alloc: 234881024 data_used: 16097742
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 331 ms_handle_reset con 0x558f12670800 session 0x558f113e16c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 331 ms_handle_reset con 0x558f14852800 session 0x558f13099a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156336128 unmapped: 50462720 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 331 ms_handle_reset con 0x558f14853c00 session 0x558f0ffd7340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 160890880 unmapped: 45907968 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.345360756s of 10.020819664s, submitted: 349
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 332 handle_osd_map epochs [332,333], i have 332, src has [1,333]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 333 ms_handle_reset con 0x558f13d28c00 session 0x558f10776e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 333 ms_handle_reset con 0x558f0fca1400 session 0x558f0fa44380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157163520 unmapped: 49635328 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 334 heartbeat osd_stat(store_statfs(0x4d1331000/0x0/0x4ffc00000, data 0x28646a8d/0x2881b000, compress 0x0/0x0/0x0, omap 0x54035, meta 0x605bfcb), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157573120 unmapped: 49225728 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 334 ms_handle_reset con 0x558f0f492c00 session 0x558f125b88c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 334 ms_handle_reset con 0x558f12671400 session 0x558f12cd0e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 155836416 unmapped: 50962432 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6085513 data_alloc: 218103808 data_used: 2916618
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 155836416 unmapped: 50962432 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 335 ms_handle_reset con 0x558f12670800 session 0x558f10b3d500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 155852800 unmapped: 50946048 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 335 handle_osd_map epochs [335,336], i have 335, src has [1,336]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 337 ms_handle_reset con 0x558f0f492c00 session 0x558f12bad180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 155918336 unmapped: 50880512 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 337 ms_handle_reset con 0x558f0fca1400 session 0x558f12d9c700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 155918336 unmapped: 50880512 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 337 heartbeat osd_stat(store_statfs(0x4cbb22000/0x0/0x4ffc00000, data 0x2de4bd46/0x2e024000, compress 0x0/0x0/0x0, omap 0x54dcb, meta 0x605b235), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156008448 unmapped: 50790400 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6094010 data_alloc: 218103808 data_used: 2916536
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156033024 unmapped: 50765824 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 337 ms_handle_reset con 0x558f13d28c00 session 0x558f10756000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 337 ms_handle_reset con 0x558f14852800 session 0x558f0fa441c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 338 ms_handle_reset con 0x558f14853000 session 0x558f12d47880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156499968 unmapped: 50298880 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.762193680s of 10.218363762s, submitted: 277
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 339 ms_handle_reset con 0x558f14853000 session 0x558f125b96c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 339 ms_handle_reset con 0x558f12671400 session 0x558f11e35500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 340 ms_handle_reset con 0x558f0f492c00 session 0x558f10757180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156565504 unmapped: 50233344 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 340 heartbeat osd_stat(store_statfs(0x4cbb1a000/0x0/0x4ffc00000, data 0x2de50fb3/0x2e02e000, compress 0x0/0x0/0x0, omap 0x557e3, meta 0x605a81d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 340 ms_handle_reset con 0x558f0fca1400 session 0x558f0fa44000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156565504 unmapped: 50233344 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 340 ms_handle_reset con 0x558f13d28c00 session 0x558f12772540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156590080 unmapped: 50208768 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 340 ms_handle_reset con 0x558f0f492c00 session 0x558f113b9dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 340 ms_handle_reset con 0x558f0fca1400 session 0x558f12bac700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6193797 data_alloc: 218103808 data_used: 4751544
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 340 ms_handle_reset con 0x558f12671400 session 0x558f0fa44fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156737536 unmapped: 50061312 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 340 ms_handle_reset con 0x558f14852800 session 0x558f113b9a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 341 ms_handle_reset con 0x558f14befc00 session 0x558f10757340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156762112 unmapped: 50036736 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 341 heartbeat osd_stat(store_statfs(0x4cae19000/0x0/0x4ffc00000, data 0x2eb50bdd/0x2ed31000, compress 0x0/0x0/0x0, omap 0x55df8, meta 0x605a208), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 342 ms_handle_reset con 0x558f14437400 session 0x558f113e0540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 342 ms_handle_reset con 0x558f0fca1400 session 0x558f0ffc36c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 342 ms_handle_reset con 0x558f14853000 session 0x558f12213180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156868608 unmapped: 49930240 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156434432 unmapped: 50364416 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 343 ms_handle_reset con 0x558f12671400 session 0x558f10756380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 344 ms_handle_reset con 0x558f14852800 session 0x558f12650a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 344 ms_handle_reset con 0x558f14436800 session 0x558f10757180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 344 ms_handle_reset con 0x558f0f492c00 session 0x558f12cd0000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 50601984 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 344 ms_handle_reset con 0x558f0fca1400 session 0x558f10792e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6226083 data_alloc: 218103808 data_used: 4752129
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 344 ms_handle_reset con 0x558f12671400 session 0x558f125b9880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156205056 unmapped: 50593792 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156205056 unmapped: 50593792 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 344 heartbeat osd_stat(store_statfs(0x4cacaf000/0x0/0x4ffc00000, data 0x2ecb3156/0x2ee99000, compress 0x0/0x0/0x0, omap 0x56bd3, meta 0x605942d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156295168 unmapped: 50503680 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.277519226s of 10.803236008s, submitted: 186
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 344 ms_handle_reset con 0x558f14853000 session 0x558f12cd1880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 344 heartbeat osd_stat(store_statfs(0x4cacb2000/0x0/0x4ffc00000, data 0x2ecb3165/0x2ee9a000, compress 0x0/0x0/0x0, omap 0x56bd3, meta 0x605942d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 345 ms_handle_reset con 0x558f0fca1400 session 0x558f0ffd68c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156246016 unmapped: 50552832 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 346 ms_handle_reset con 0x558f0f492c00 session 0x558f113e0e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 346 ms_handle_reset con 0x558f14437400 session 0x558f0fb0c8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156286976 unmapped: 50511872 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6235155 data_alloc: 218103808 data_used: 4752812
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 347 ms_handle_reset con 0x558f12671400 session 0x558f107576c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 50495488 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 347 ms_handle_reset con 0x558f14436800 session 0x558f1292d340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 50495488 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 347 handle_osd_map epochs [347,348], i have 347, src has [1,348]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 348 ms_handle_reset con 0x558f0f492c00 session 0x558f0ffc2e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156336128 unmapped: 50462720 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 348 heartbeat osd_stat(store_statfs(0x4caca6000/0x0/0x4ffc00000, data 0x2ecba0a8/0x2eea4000, compress 0x0/0x0/0x0, omap 0x57834, meta 0x60587cc), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156336128 unmapped: 50462720 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 349 ms_handle_reset con 0x558f12671400 session 0x558f12f82540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 349 ms_handle_reset con 0x558f14437400 session 0x558f10854a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 349 ms_handle_reset con 0x558f14436c00 session 0x558f116b5340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 349 ms_handle_reset con 0x558f0fca1400 session 0x558f126d0700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 349 ms_handle_reset con 0x558f0f492c00 session 0x558f10792700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 156385280 unmapped: 50413568 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 349 ms_handle_reset con 0x558f0fca1400 session 0x558f1273a8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 349 ms_handle_reset con 0x558f14436c00 session 0x558f12cd1a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 349 ms_handle_reset con 0x558f14437400 session 0x558f12cd0540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 349 ms_handle_reset con 0x558f179d1c00 session 0x558f12026000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 349 ms_handle_reset con 0x558f0f492c00 session 0x558f11e34540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 349 handle_osd_map epochs [349,350], i have 349, src has [1,350]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6267648 data_alloc: 218103808 data_used: 4752714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157949952 unmapped: 48848896 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 350 ms_handle_reset con 0x558f12671400 session 0x558f124d4380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157949952 unmapped: 48848896 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 350 heartbeat osd_stat(store_statfs(0x4caace000/0x0/0x4ffc00000, data 0x2ee9087e/0x2f07c000, compress 0x0/0x0/0x0, omap 0x57dc7, meta 0x6058239), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 351 ms_handle_reset con 0x558f0fca1400 session 0x558f12026fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 351 ms_handle_reset con 0x558f14436c00 session 0x558f124e2540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.415821075s of 10.014881134s, submitted: 172
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157974528 unmapped: 48824320 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 351 heartbeat osd_stat(store_statfs(0x4caaca000/0x0/0x4ffc00000, data 0x2ee924c2/0x2f07e000, compress 0x0/0x0/0x0, omap 0x57fac, meta 0x6058054), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 351 ms_handle_reset con 0x558f14437400 session 0x558f128b8000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157564928 unmapped: 49233920 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 351 heartbeat osd_stat(store_statfs(0x4caaa4000/0x0/0x4ffc00000, data 0x2eebc4c2/0x2f0a8000, compress 0x0/0x0/0x0, omap 0x57fac, meta 0x6058054), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157073408 unmapped: 49725440 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6282180 data_alloc: 218103808 data_used: 6454191
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157073408 unmapped: 49725440 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157081600 unmapped: 49717248 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157089792 unmapped: 49709056 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 352 ms_handle_reset con 0x558f14436c00 session 0x558f12bad180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157089792 unmapped: 49709056 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 352 heartbeat osd_stat(store_statfs(0x4caa5f000/0x0/0x4ffc00000, data 0x2eefdf7d/0x2f0eb000, compress 0x0/0x0/0x0, omap 0x585c7, meta 0x6057a39), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157089792 unmapped: 49709056 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 352 heartbeat osd_stat(store_statfs(0x4caa5f000/0x0/0x4ffc00000, data 0x2eefdf7d/0x2f0eb000, compress 0x0/0x0/0x0, omap 0x585c7, meta 0x6057a39), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 352 heartbeat osd_stat(store_statfs(0x4caa5f000/0x0/0x4ffc00000, data 0x2eefdf7d/0x2f0eb000, compress 0x0/0x0/0x0, omap 0x585c7, meta 0x6057a39), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6288804 data_alloc: 218103808 data_used: 6454191
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157089792 unmapped: 49709056 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 352 heartbeat osd_stat(store_statfs(0x4caa5f000/0x0/0x4ffc00000, data 0x2eefdf7d/0x2f0eb000, compress 0x0/0x0/0x0, omap 0x585c7, meta 0x6057a39), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 352 heartbeat osd_stat(store_statfs(0x4caa5f000/0x0/0x4ffc00000, data 0x2eefdf7d/0x2f0eb000, compress 0x0/0x0/0x0, omap 0x585c7, meta 0x6057a39), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157089792 unmapped: 49709056 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157106176 unmapped: 49692672 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157114368 unmapped: 49684480 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 heartbeat osd_stat(store_statfs(0x4caa5c000/0x0/0x4ffc00000, data 0x2eeff9fc/0x2f0ee000, compress 0x0/0x0/0x0, omap 0x58721, meta 0x60578df), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 157114368 unmapped: 49684480 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.159007072s of 12.223500252s, submitted: 33
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 ms_handle_reset con 0x558f179d1000 session 0x558f113e1a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6475602 data_alloc: 218103808 data_used: 7694767
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 163684352 unmapped: 43114496 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 ms_handle_reset con 0x558f13d29000 session 0x558f124d5880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164184064 unmapped: 42614784 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 ms_handle_reset con 0x558f13d28400 session 0x558f124e6380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 ms_handle_reset con 0x558f12670c00 session 0x558f124e6380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164331520 unmapped: 42467328 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 heartbeat osd_stat(store_statfs(0x4c9a03000/0x0/0x4ffc00000, data 0x3088ba1f/0x30149000, compress 0x0/0x0/0x0, omap 0x5b63d, meta 0x60549c3), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164331520 unmapped: 42467328 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164331520 unmapped: 42467328 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6495605 data_alloc: 218103808 data_used: 8834047
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164331520 unmapped: 42467328 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164470784 unmapped: 42328064 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164470784 unmapped: 42328064 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 heartbeat osd_stat(store_statfs(0x4c99de000/0x0/0x4ffc00000, data 0x308b0a1f/0x3016e000, compress 0x0/0x0/0x0, omap 0x5b63d, meta 0x60549c3), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 ms_handle_reset con 0x558f12670c00 session 0x558f0fa441c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 ms_handle_reset con 0x558f13d28400 session 0x558f10756000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164478976 unmapped: 42319872 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 ms_handle_reset con 0x558f0f492c00 session 0x558f10854fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 ms_handle_reset con 0x558f0fca1400 session 0x558f0ffd7500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 ms_handle_reset con 0x558f12671400 session 0x558f128b9a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 ms_handle_reset con 0x558f0f492c00 session 0x558f0ffc2e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162258944 unmapped: 44539904 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6455047 data_alloc: 218103808 data_used: 7254511
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162258944 unmapped: 44539904 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 heartbeat osd_stat(store_statfs(0x4c9bd6000/0x0/0x4ffc00000, data 0x306b39ec/0x2ff6f000, compress 0x0/0x0/0x0, omap 0x5bbbf, meta 0x6054441), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162258944 unmapped: 44539904 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 ms_handle_reset con 0x558f0fca1400 session 0x558f124d4a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.717720032s of 12.168184280s, submitted: 204
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 ms_handle_reset con 0x558f12670c00 session 0x558f1273b880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 353 handle_osd_map epochs [353,354], i have 354, src has [1,354]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 162095104 unmapped: 44703744 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 354 ms_handle_reset con 0x558f13d28400 session 0x558f1242fdc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 355 heartbeat osd_stat(store_statfs(0x4c9c18000/0x0/0x4ffc00000, data 0x30675588/0x2ff32000, compress 0x0/0x0/0x0, omap 0x5bde9, meta 0x6054217), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 355 ms_handle_reset con 0x558f12869000 session 0x558f113b9a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 168394752 unmapped: 38404096 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 355 ms_handle_reset con 0x558f0f492c00 session 0x558f12027880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 355 ms_handle_reset con 0x558f12671400 session 0x558f1245c700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 168542208 unmapped: 38256640 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6557997 data_alloc: 218103808 data_used: 6997884
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 355 ms_handle_reset con 0x558f12670c00 session 0x558f12026c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 356 ms_handle_reset con 0x558f0fca1400 session 0x558f12026000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 356 ms_handle_reset con 0x558f13d28400 session 0x558f113b9dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 38248448 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 356 ms_handle_reset con 0x558f13d28400 session 0x558f12213180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 168550400 unmapped: 38248448 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 356 ms_handle_reset con 0x558f0f492c00 session 0x558f12cd0c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 38232064 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c8e84000/0x0/0x4ffc00000, data 0x3155ed14/0x30cc8000, compress 0x0/0x0/0x0, omap 0x5cc5a, meta 0x60533a6), peers [0,2] op hist [0,0,1])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 357 ms_handle_reset con 0x558f0fca1400 session 0x558f10792a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 168599552 unmapped: 38199296 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c9d6c000/0x0/0x4ffc00000, data 0x3051d8b2/0x2fddd000, compress 0x0/0x0/0x0, omap 0x5cee5, meta 0x605311b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 357 ms_handle_reset con 0x558f12671400 session 0x558f0fb0d6c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 357 ms_handle_reset con 0x558f12670c00 session 0x558f10777340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 357 ms_handle_reset con 0x558f0f492c00 session 0x558f12d9c700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 168615936 unmapped: 38182912 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 357 ms_handle_reset con 0x558f0fca1400 session 0x558f1292d6c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 357 ms_handle_reset con 0x558f12671400 session 0x558f1242f880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6467430 data_alloc: 218103808 data_used: 7001882
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 168501248 unmapped: 38297600 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 358 ms_handle_reset con 0x558f10b55400 session 0x558f10b3d340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 358 ms_handle_reset con 0x558f13d28400 session 0x558f1242e700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 168509440 unmapped: 38289408 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 358 ms_handle_reset con 0x558f0fca1400 session 0x558f12772380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 358 ms_handle_reset con 0x558f0f492c00 session 0x558f0ffd6fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.124475479s of 10.801313400s, submitted: 208
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 168517632 unmapped: 38281216 heap: 206798848 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 359 heartbeat osd_stat(store_statfs(0x4c9d1b000/0x0/0x4ffc00000, data 0x3056d4dc/0x2fe2f000, compress 0x0/0x0/0x0, omap 0x5dd0c, meta 0x60522f4), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 174194688 unmapped: 41009152 heap: 215203840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170614784 unmapped: 44589056 heap: 215203840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7455070 data_alloc: 218103808 data_used: 7002739
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 175489024 unmapped: 43917312 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 359 heartbeat osd_stat(store_statfs(0x4bd918000/0x0/0x4ffc00000, data 0x3c96efe9/0x3c234000, compress 0x0/0x0/0x0, omap 0x5e458, meta 0x6051ba8), peers [0,2] op hist [0,0,0,0,0,0,2])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 176766976 unmapped: 42639360 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 359 ms_handle_reset con 0x558f14bef400 session 0x558f10854c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 359 ms_handle_reset con 0x558f14bef800 session 0x558f10756380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f179d0000 session 0x558f12650c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 173137920 unmapped: 46268416 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f12671400 session 0x558f1273bdc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f10b55400 session 0x558f10855c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f0f492c00 session 0x558f0fb0d180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 173522944 unmapped: 45883392 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f14bef400 session 0x558f108c3a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f0fca1400 session 0x558f0ffd7c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f14bef800 session 0x558f12b95340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 173588480 unmapped: 45817856 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7531866 data_alloc: 218103808 data_used: 7007107
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 heartbeat osd_stat(store_statfs(0x4b5915000/0x0/0x4ffc00000, data 0x44970a68/0x44237000, compress 0x0/0x0/0x0, omap 0x5edb6, meta 0x605124a), peers [0,2] op hist [0,0,0,0,0,0,2,2])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f10b55400 session 0x558f12cd0000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 172277760 unmapped: 47128576 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f0f492c00 session 0x558f0fa45880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f12671400 session 0x558f0fa441c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f14bef400 session 0x558f12650c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 171982848 unmapped: 47423488 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f0f492c00 session 0x558f10855180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f10b55400 session 0x558f1245c700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 ms_handle_reset con 0x558f14bef400 session 0x558f12772540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 360 handle_osd_map epochs [360,361], i have 361, src has [1,361]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 361 ms_handle_reset con 0x558f12671400 session 0x558f10792e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 361 ms_handle_reset con 0x558f14bef800 session 0x558f116b56c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.430486679s of 10.003724098s, submitted: 546
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 172064768 unmapped: 47341568 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 361 ms_handle_reset con 0x558f0f492c00 session 0x558f10854c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 361 ms_handle_reset con 0x558f10b55400 session 0x558f12cd0c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 361 ms_handle_reset con 0x558f179d1000 session 0x558f12d476c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 361 ms_handle_reset con 0x558f1263cc00 session 0x558f126f1340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 172498944 unmapped: 46907392 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 361 heartbeat osd_stat(store_statfs(0x4c9ce9000/0x0/0x4ffc00000, data 0x30596668/0x2fe5f000, compress 0x0/0x0/0x0, omap 0x5fe64, meta 0x605019c), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 361 heartbeat osd_stat(store_statfs(0x4c9ce9000/0x0/0x4ffc00000, data 0x30596668/0x2fe5f000, compress 0x0/0x0/0x0, omap 0x5fe64, meta 0x605019c), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 361 ms_handle_reset con 0x558f1263d400 session 0x558f113e1c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 361 ms_handle_reset con 0x558f0f492c00 session 0x558f126d1180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 172064768 unmapped: 47341568 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6587340 data_alloc: 218103808 data_used: 7126403
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 172064768 unmapped: 47341568 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 362 ms_handle_reset con 0x558f1263cc00 session 0x558f10777180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 362 ms_handle_reset con 0x558f12671400 session 0x558f126cda40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 362 ms_handle_reset con 0x558f14bef400 session 0x558f1242fdc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 362 ms_handle_reset con 0x558f1263d400 session 0x558f1292d340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 363 ms_handle_reset con 0x558f10b55400 session 0x558f11e35500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 172097536 unmapped: 47308800 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 363 ms_handle_reset con 0x558f0f492c00 session 0x558f11e348c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 363 ms_handle_reset con 0x558f1263cc00 session 0x558f0fa45180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 363 ms_handle_reset con 0x558f12671400 session 0x558f10855dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 363 ms_handle_reset con 0x558f14bef400 session 0x558f10b3ca80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 363 ms_handle_reset con 0x558f0f492c00 session 0x558f125b8fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 169697280 unmapped: 49709056 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 169697280 unmapped: 49709056 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 364 heartbeat osd_stat(store_statfs(0x4cba88000/0x0/0x4ffc00000, data 0x2dec88d0/0x2e0c1000, compress 0x0/0x0/0x0, omap 0x60f3d, meta 0x604f0c3), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 364 ms_handle_reset con 0x558f1263cc00 session 0x558f126d0c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 364 handle_osd_map epochs [364,365], i have 365, src has [1,365]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 365 ms_handle_reset con 0x558f12671400 session 0x558f127b5c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 365 ms_handle_reset con 0x558f10b55400 session 0x558f1245bdc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 169705472 unmapped: 49700864 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 365 ms_handle_reset con 0x558f179d1000 session 0x558f1273b500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4630607 data_alloc: 218103808 data_used: 4887727
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 366 ms_handle_reset con 0x558f10b55400 session 0x558f10776700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 366 ms_handle_reset con 0x558f0f492c00 session 0x558f116b4700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 166977536 unmapped: 52428800 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 366 ms_handle_reset con 0x558f1263cc00 session 0x558f1242e8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 366 ms_handle_reset con 0x558f12671400 session 0x558f0fb0d340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165265408 unmapped: 54140928 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165265408 unmapped: 54140928 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165265408 unmapped: 54140928 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.508609772s of 11.189541817s, submitted: 402
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 367 ms_handle_reset con 0x558f179d1000 session 0x558f113e0e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 367 ms_handle_reset con 0x558f0f492c00 session 0x558f10b3ca80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165281792 unmapped: 54124544 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 367 heartbeat osd_stat(store_statfs(0x4f86d0000/0x0/0x4ffc00000, data 0x127fbc9/0x147a000, compress 0x0/0x0/0x0, omap 0x618cd, meta 0x604e733), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 367 ms_handle_reset con 0x558f10b55400 session 0x558f12cd1340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2705009 data_alloc: 218103808 data_used: 4770479
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165281792 unmapped: 54124544 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 367 heartbeat osd_stat(store_statfs(0x4f86cf000/0x0/0x4ffc00000, data 0x127fc2b/0x147b000, compress 0x0/0x0/0x0, omap 0x61959, meta 0x604e6a7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165281792 unmapped: 54124544 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 368 ms_handle_reset con 0x558f1263cc00 session 0x558f113e1dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165281792 unmapped: 54124544 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 368 ms_handle_reset con 0x558f12671400 session 0x558f0fb0d180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 368 ms_handle_reset con 0x558f1263c800 session 0x558f10777180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 368 ms_handle_reset con 0x558f0f492c00 session 0x558f124d5180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 54009856 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 54009856 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 369 ms_handle_reset con 0x558f10b55400 session 0x558f11e34000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2737563 data_alloc: 218103808 data_used: 4770479
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 54067200 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 369 heartbeat osd_stat(store_statfs(0x4f83bf000/0x0/0x4ffc00000, data 0x158c69f/0x178b000, compress 0x0/0x0/0x0, omap 0x6222a, meta 0x604ddd6), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 369 heartbeat osd_stat(store_statfs(0x4f83bf000/0x0/0x4ffc00000, data 0x158c69f/0x178b000, compress 0x0/0x0/0x0, omap 0x6222a, meta 0x604ddd6), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 54067200 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 370 ms_handle_reset con 0x558f1263cc00 session 0x558f1273a540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 370 ms_handle_reset con 0x558f12864000 session 0x558f0fa44c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 370 ms_handle_reset con 0x558f12671400 session 0x558f125b9880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 370 ms_handle_reset con 0x558f0f492c00 session 0x558f12773500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165347328 unmapped: 54059008 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165347328 unmapped: 54059008 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.132351875s of 10.308260918s, submitted: 80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 370 ms_handle_reset con 0x558f10b55400 session 0x558f12b95340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 370 ms_handle_reset con 0x558f1263cc00 session 0x558f113e1c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 54042624 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f83bc000/0x0/0x4ffc00000, data 0x158e13e/0x178e000, compress 0x0/0x0/0x0, omap 0x629eb, meta 0x604d615), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2740554 data_alloc: 218103808 data_used: 4771064
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165363712 unmapped: 54042624 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 371 ms_handle_reset con 0x558f12671400 session 0x558f113b8540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f83be000/0x0/0x4ffc00000, data 0x158e13e/0x178e000, compress 0x0/0x0/0x0, omap 0x629eb, meta 0x604d615), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165380096 unmapped: 54026240 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 54009856 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 372 ms_handle_reset con 0x558f12864000 session 0x558f124d5880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 372 heartbeat osd_stat(store_statfs(0x4f83b7000/0x0/0x4ffc00000, data 0x158fd3e/0x1792000, compress 0x0/0x0/0x0, omap 0x6320e, meta 0x604cdf2), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 54009856 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 54009856 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2752226 data_alloc: 218103808 data_used: 4775738
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 54009856 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 54009856 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 373 ms_handle_reset con 0x558f0f492c00 session 0x558f11e34a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 373 ms_handle_reset con 0x558f1263cc00 session 0x558f125b8700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 373 ms_handle_reset con 0x558f10b55400 session 0x558f130b6c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 373 ms_handle_reset con 0x558f12671400 session 0x558f0ffc21c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 54009856 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165396480 unmapped: 54009856 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 373 heartbeat osd_stat(store_statfs(0x4f83b2000/0x0/0x4ffc00000, data 0x1593375/0x1798000, compress 0x0/0x0/0x0, omap 0x6371e, meta 0x604c8e2), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.894468307s of 10.028285980s, submitted: 82
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 374 ms_handle_reset con 0x558f12865000 session 0x558f113b9a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 374 ms_handle_reset con 0x558f0f492c00 session 0x558f1245ba40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 374 ms_handle_reset con 0x558f10b55400 session 0x558f1245d6c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 53993472 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 374 ms_handle_reset con 0x558f1263cc00 session 0x558f11e35c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 374 ms_handle_reset con 0x558f12671400 session 0x558f0fa44000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 374 ms_handle_reset con 0x558f12865400 session 0x558f125b8c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2759638 data_alloc: 218103808 data_used: 4776026
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 374 ms_handle_reset con 0x558f0f492c00 session 0x558f127736c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 374 ms_handle_reset con 0x558f10b55400 session 0x558f1273a380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165445632 unmapped: 53960704 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 375 ms_handle_reset con 0x558f1263cc00 session 0x558f125b8c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 375 ms_handle_reset con 0x558f12671400 session 0x558f12650c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 375 heartbeat osd_stat(store_statfs(0x4f83ad000/0x0/0x4ffc00000, data 0x1594f44/0x179d000, compress 0x0/0x0/0x0, omap 0x63e73, meta 0x604c18d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165740544 unmapped: 53665792 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 376 ms_handle_reset con 0x558f12672000 session 0x558f0ffd68c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 376 ms_handle_reset con 0x558f10b55400 session 0x558f126f1c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 376 ms_handle_reset con 0x558f0f492c00 session 0x558f124d5180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164888576 unmapped: 54517760 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 376 ms_handle_reset con 0x558f1263cc00 session 0x558f0fb0d340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164888576 unmapped: 54517760 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 376 ms_handle_reset con 0x558f12671400 session 0x558f12d476c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 376 ms_handle_reset con 0x558f14852000 session 0x558f0e416fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 376 ms_handle_reset con 0x558f10b55400 session 0x558f11e34000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 376 ms_handle_reset con 0x558f0f492c00 session 0x558f10776e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 376 ms_handle_reset con 0x558f1263cc00 session 0x558f116b4700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164921344 unmapped: 54484992 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 376 ms_handle_reset con 0x558f1272d400 session 0x558f113b8540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 377 ms_handle_reset con 0x558f12671400 session 0x558f126d0540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 377 ms_handle_reset con 0x558f12869400 session 0x558f1074e700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2799974 data_alloc: 218103808 data_used: 7786248
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 165191680 unmapped: 54214656 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 378 ms_handle_reset con 0x558f12671400 session 0x558f12bac700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 378 ms_handle_reset con 0x558f12868000 session 0x558f10792e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 378 ms_handle_reset con 0x558f10b55400 session 0x558f0ffd6c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 378 ms_handle_reset con 0x558f0f492c00 session 0x558f10776700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 378 heartbeat osd_stat(store_statfs(0x4f83a1000/0x0/0x4ffc00000, data 0x159a2bf/0x17a9000, compress 0x0/0x0/0x0, omap 0x655ef, meta 0x604aa11), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 54452224 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 378 ms_handle_reset con 0x558f0f492c00 session 0x558f125b9a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 378 ms_handle_reset con 0x558f10b55400 session 0x558f125b8540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 378 ms_handle_reset con 0x558f12671400 session 0x558f1273bc00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164970496 unmapped: 54435840 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 378 ms_handle_reset con 0x558f12868000 session 0x558f126f1340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164978688 unmapped: 54427648 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 378 ms_handle_reset con 0x558f12869400 session 0x558f124d4a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.160708427s of 10.426481247s, submitted: 146
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164487168 unmapped: 54919168 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 379 ms_handle_reset con 0x558f12869400 session 0x558f10854000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2803749 data_alloc: 218103808 data_used: 7786852
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 379 ms_handle_reset con 0x558f0f492c00 session 0x558f124e61c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 379 heartbeat osd_stat(store_statfs(0x4f839d000/0x0/0x4ffc00000, data 0x159da4d/0x17ad000, compress 0x0/0x0/0x0, omap 0x6662c, meta 0x60499d4), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 164773888 unmapped: 54632448 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 379 ms_handle_reset con 0x558f10b55400 session 0x558f124e36c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 380 ms_handle_reset con 0x558f13d28000 session 0x558f10776a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 166010880 unmapped: 53395456 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 381 ms_handle_reset con 0x558f13d29000 session 0x558f12bac380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170811392 unmapped: 48594944 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 381 ms_handle_reset con 0x558f0f492c00 session 0x558f1273b880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 171499520 unmapped: 47906816 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 171499520 unmapped: 47906816 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 381 heartbeat osd_stat(store_statfs(0x4f7efe000/0x0/0x4ffc00000, data 0x1a240be/0x1c34000, compress 0x0/0x0/0x0, omap 0x66db2, meta 0x604924e), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2852735 data_alloc: 218103808 data_used: 8968036
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 171532288 unmapped: 47874048 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 171614208 unmapped: 47792128 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170090496 unmapped: 49315840 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170090496 unmapped: 49315840 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.455631256s of 10.008841515s, submitted: 218
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 382 ms_handle_reset con 0x558f10b55400 session 0x558f0fb0c700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170090496 unmapped: 49315840 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 382 heartbeat osd_stat(store_statfs(0x4f7f11000/0x0/0x4ffc00000, data 0x1a25be7/0x1c39000, compress 0x0/0x0/0x0, omap 0x67465, meta 0x6048b9b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2848359 data_alloc: 218103808 data_used: 8968036
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170090496 unmapped: 49315840 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 383 ms_handle_reset con 0x558f12869400 session 0x558f10854c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 383 ms_handle_reset con 0x558f13d28000 session 0x558f108541c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170098688 unmapped: 49307648 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 384 ms_handle_reset con 0x558f134cfc00 session 0x558f1273b180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 384 ms_handle_reset con 0x558f0f492c00 session 0x558f12bad500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 49299456 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 385 ms_handle_reset con 0x558f10b55400 session 0x558f126508c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 49299456 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 385 ms_handle_reset con 0x558f12869400 session 0x558f113e1dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 385 ms_handle_reset con 0x558f13d28000 session 0x558f12cd1340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f7f07000/0x0/0x4ffc00000, data 0x1a2af2d/0x1c43000, compress 0x0/0x0/0x0, omap 0x67bc1, meta 0x604843f), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 49299456 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 385 ms_handle_reset con 0x558f1272e800 session 0x558f0fdaa000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2858617 data_alloc: 218103808 data_used: 8969206
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 49299456 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 49299456 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 385 handle_osd_map epochs [385,386], i have 386, src has [1,386]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 386 ms_handle_reset con 0x558f1272e800 session 0x558f0fb0d180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 49299456 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 386 ms_handle_reset con 0x558f0f492c00 session 0x558f10b3dc00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 386 heartbeat osd_stat(store_statfs(0x4f7f03000/0x0/0x4ffc00000, data 0x1a2ca83/0x1c45000, compress 0x0/0x0/0x0, omap 0x67de2, meta 0x604821e), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 49299456 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 387 ms_handle_reset con 0x558f10b55400 session 0x558f116b5a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 49299456 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2864149 data_alloc: 218103808 data_used: 8969791
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.115644455s of 11.207120895s, submitted: 54
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 387 ms_handle_reset con 0x558f15310c00 session 0x558f130cefc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 387 ms_handle_reset con 0x558f13d28000 session 0x558f113e1500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 387 ms_handle_reset con 0x558f12869400 session 0x558f0ffd6700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 49299456 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 387 ms_handle_reset con 0x558f16b2b000 session 0x558f10793500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 49299456 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 388 ms_handle_reset con 0x558f0f492c00 session 0x558f10777180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170123264 unmapped: 49283072 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170147840 unmapped: 49258496 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 389 ms_handle_reset con 0x558f10b55400 session 0x558f12213340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f7efd000/0x0/0x4ffc00000, data 0x1a301d2/0x1c4d000, compress 0x0/0x0/0x0, omap 0x68bf1, meta 0x604740f), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f7ef7000/0x0/0x4ffc00000, data 0x1a31dec/0x1c51000, compress 0x0/0x0/0x0, omap 0x68db2, meta 0x604724e), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170147840 unmapped: 49258496 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 389 ms_handle_reset con 0x558f1272e800 session 0x558f126d0fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2877728 data_alloc: 218103808 data_used: 8969905
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f7efc000/0x0/0x4ffc00000, data 0x1a31d8a/0x1c50000, compress 0x0/0x0/0x0, omap 0x69192, meta 0x6046e6e), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170147840 unmapped: 49258496 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 389 handle_osd_map epochs [390,390], i have 390, src has [1,390]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 390 ms_handle_reset con 0x558f1272e800 session 0x558f1245da40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170164224 unmapped: 49242112 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 391 ms_handle_reset con 0x558f0f492c00 session 0x558f0fa44a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 391 ms_handle_reset con 0x558f12869400 session 0x558f108c2a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 391 ms_handle_reset con 0x558f10b55400 session 0x558f1273ba40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170532864 unmapped: 48873472 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170532864 unmapped: 48873472 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 391 ms_handle_reset con 0x558f1424a800 session 0x558f113b9340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 391 heartbeat osd_stat(store_statfs(0x4f7ece000/0x0/0x4ffc00000, data 0x1a5f3a3/0x1c7e000, compress 0x0/0x0/0x0, omap 0x699be, meta 0x6046642), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170557440 unmapped: 48848896 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2889132 data_alloc: 218103808 data_used: 9119156
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170557440 unmapped: 48848896 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.041714668s of 10.356242180s, submitted: 110
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 392 ms_handle_reset con 0x558f0f492c00 session 0x558f128b8700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 392 ms_handle_reset con 0x558f1272e800 session 0x558f0fdaa8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170557440 unmapped: 48848896 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 392 ms_handle_reset con 0x558f12869400 session 0x558f12f82a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 393 ms_handle_reset con 0x558f10b55400 session 0x558f1273a1c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 170565632 unmapped: 48840704 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 393 ms_handle_reset con 0x558f1424a800 session 0x558f12baa540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 393 ms_handle_reset con 0x558f0f492c00 session 0x558f108c2380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 171614208 unmapped: 47792128 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 393 heartbeat osd_stat(store_statfs(0x4f7ec0000/0x0/0x4ffc00000, data 0x1d72b4b/0x1c88000, compress 0x0/0x0/0x0, omap 0x6a58c, meta 0x6045a74), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 171614208 unmapped: 47792128 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 393 handle_osd_map epochs [393,394], i have 394, src has [1,394]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 394 ms_handle_reset con 0x558f10b55400 session 0x558f0fa44c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 394 ms_handle_reset con 0x558f1272e800 session 0x558f12d46fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2930850 data_alloc: 218103808 data_used: 9119156
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 171622400 unmapped: 47783936 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 394 heartbeat osd_stat(store_statfs(0x4f7ebd000/0x0/0x4ffc00000, data 0x1d74776/0x1c8d000, compress 0x0/0x0/0x0, omap 0x6a7ac, meta 0x6045854), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 394 ms_handle_reset con 0x558f1424b400 session 0x558f128b8540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 171655168 unmapped: 47751168 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 395 ms_handle_reset con 0x558f12869400 session 0x558f0fdabdc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 395 ms_handle_reset con 0x558f0f492c00 session 0x558f126d1180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 171663360 unmapped: 47742976 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f7eb9000/0x0/0x4ffc00000, data 0x1d763c8/0x1c91000, compress 0x0/0x0/0x0, omap 0x6a93e, meta 0x60456c2), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 396 ms_handle_reset con 0x558f10b55400 session 0x558f12651180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f7eb4000/0x0/0x4ffc00000, data 0x1d77f66/0x1c93000, compress 0x0/0x0/0x0, omap 0x6aeb2, meta 0x604514e), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 171704320 unmapped: 47702016 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 396 ms_handle_reset con 0x558f1272e800 session 0x558f0e417340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 396 ms_handle_reset con 0x558f1424b400 session 0x558f108c36c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 171704320 unmapped: 47702016 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2953574 data_alloc: 234881024 data_used: 13238781
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 175259648 unmapped: 44146688 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 175276032 unmapped: 44130304 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.841583252s of 10.994333267s, submitted: 67
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 175292416 unmapped: 44113920 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 397 ms_handle_reset con 0x558f16b28000 session 0x558f0fb0ce00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 397 heartbeat osd_stat(store_statfs(0x4f7dfe000/0x0/0x4ffc00000, data 0x1e2fa01/0x1d4c000, compress 0x0/0x0/0x0, omap 0x6b0d3, meta 0x6044f2d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 175439872 unmapped: 43966464 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 175439872 unmapped: 43966464 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015144 data_alloc: 234881024 data_used: 13347325
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 175710208 unmapped: 43696128 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 175808512 unmapped: 43597824 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 397 heartbeat osd_stat(store_statfs(0x4f74dc000/0x0/0x4ffc00000, data 0x2753a01/0x2670000, compress 0x0/0x0/0x0, omap 0x6b0d3, meta 0x6044f2d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 397 ms_handle_reset con 0x558f0f492c00 session 0x558f0fb0c540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 175857664 unmapped: 43548672 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 175857664 unmapped: 43548672 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 182763520 unmapped: 36642816 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3068330 data_alloc: 234881024 data_used: 21805565
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184418304 unmapped: 34988032 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 397 heartbeat osd_stat(store_statfs(0x4f74db000/0x0/0x4ffc00000, data 0x2753a24/0x2671000, compress 0x0/0x0/0x0, omap 0x6b352, meta 0x6044cae), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184418304 unmapped: 34988032 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.916599274s of 10.003260612s, submitted: 38
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184532992 unmapped: 34873344 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f74d6000/0x0/0x4ffc00000, data 0x27554a3/0x2674000, compress 0x0/0x0/0x0, omap 0x6b49e, meta 0x6044b62), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 ms_handle_reset con 0x558f1424b400 session 0x558f1242e700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184565760 unmapped: 34840576 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f74d5000/0x0/0x4ffc00000, data 0x27554b3/0x2675000, compress 0x0/0x0/0x0, omap 0x6b52c, meta 0x6044ad4), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 ms_handle_reset con 0x558f0ffee400 session 0x558f0fa44540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 ms_handle_reset con 0x558f12192c00 session 0x558f0ffc21c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184598528 unmapped: 34807808 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073212 data_alloc: 234881024 data_used: 21809661
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 ms_handle_reset con 0x558f14e32800 session 0x558f12cd0a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184623104 unmapped: 34783232 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184623104 unmapped: 34783232 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 ms_handle_reset con 0x558f0f492c00 session 0x558f10854a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 ms_handle_reset con 0x558f0ffee400 session 0x558f10855880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 34701312 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f74d7000/0x0/0x4ffc00000, data 0x27554a3/0x2674000, compress 0x0/0x0/0x0, omap 0x6b6d6, meta 0x604492a), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 34701312 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184713216 unmapped: 34693120 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3072325 data_alloc: 234881024 data_used: 21809661
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184729600 unmapped: 34676736 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185778176 unmapped: 33628160 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.182846069s of 10.350908279s, submitted: 77
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 ms_handle_reset con 0x558f16b2b000 session 0x558f107776c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 ms_handle_reset con 0x558f15310c00 session 0x558f11e35dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186351616 unmapped: 33054720 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 ms_handle_reset con 0x558f12192c00 session 0x558f108548c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f6c93000/0x0/0x4ffc00000, data 0x2f6f4a3/0x2e8e000, compress 0x0/0x0/0x0, omap 0x6bb19, meta 0x60444e7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186376192 unmapped: 33030144 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 ms_handle_reset con 0x558f0f492c00 session 0x558f125b9880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186679296 unmapped: 32727040 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 ms_handle_reset con 0x558f0ffee400 session 0x558f1273aa80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3125256 data_alloc: 234881024 data_used: 22290941
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 ms_handle_reset con 0x558f15310c00 session 0x558f12650c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f6d5a000/0x0/0x4ffc00000, data 0x2f00441/0x2df1000, compress 0x0/0x0/0x0, omap 0x6c0eb, meta 0x6043f15), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186720256 unmapped: 32686080 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 399 ms_handle_reset con 0x558f16b2b000 session 0x558f1292d6c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185999360 unmapped: 33406976 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 399 ms_handle_reset con 0x558f12673800 session 0x558f1245ba40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 399 ms_handle_reset con 0x558f0f492c00 session 0x558f12027880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185999360 unmapped: 33406976 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f6d58000/0x0/0x4ffc00000, data 0x2bac031/0x2dd7000, compress 0x0/0x0/0x0, omap 0x6c372, meta 0x6043c8e), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185999360 unmapped: 33406976 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 399 ms_handle_reset con 0x558f0ffee400 session 0x558f116b56c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 399 ms_handle_reset con 0x558f15310c00 session 0x558f124d5180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 35438592 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000642 data_alloc: 234881024 data_used: 13972379
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 35438592 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 35438592 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f7502000/0x0/0x4ffc00000, data 0x241f00e/0x2649000, compress 0x0/0x0/0x0, omap 0x6c804, meta 0x60437fc), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 35438592 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.568756104s of 10.751868248s, submitted: 120
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 401 ms_handle_reset con 0x558f16b2b000 session 0x558f125b9dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 35438592 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 35438592 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 401 heartbeat osd_stat(store_statfs(0x4f74fc000/0x0/0x4ffc00000, data 0x242265f/0x264e000, compress 0x0/0x0/0x0, omap 0x6ca94, meta 0x604356c), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006234 data_alloc: 234881024 data_used: 13976440
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 35438592 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 35438592 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 35438592 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 35438592 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 401 heartbeat osd_stat(store_statfs(0x4f74fc000/0x0/0x4ffc00000, data 0x242265f/0x264e000, compress 0x0/0x0/0x0, omap 0x6ca94, meta 0x604356c), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 35438592 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006234 data_alloc: 234881024 data_used: 13976440
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 35438592 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 401 ms_handle_reset con 0x558f1424b400 session 0x558f10855a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 401 ms_handle_reset con 0x558f0f492c00 session 0x558f1273a8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 401 handle_osd_map epochs [401,402], i have 402, src has [1,402]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185024512 unmapped: 34381824 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0ffee400 session 0x558f12badc00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185024512 unmapped: 34381824 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185024512 unmapped: 34381824 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f15310c00 session 0x558f124e6380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185024512 unmapped: 34381824 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.982844353s of 12.037858963s, submitted: 50
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f16b2b000 session 0x558f0fdaa1c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f14e31c00 session 0x558f1245bdc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0f492c00 session 0x558f130b6c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053714 data_alloc: 234881024 data_used: 13976440
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x24240de/0x2651000, compress 0x0/0x0/0x0, omap 0x6d1d1, meta 0x6042e2f), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185434112 unmapped: 33972224 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185434112 unmapped: 33972224 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185434112 unmapped: 33972224 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185434112 unmapped: 33972224 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185434112 unmapped: 33972224 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053970 data_alloc: 234881024 data_used: 13976555
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f6dcc000/0x0/0x4ffc00000, data 0x2b530de/0x2d80000, compress 0x0/0x0/0x0, omap 0x6d25f, meta 0x6042da1), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185434112 unmapped: 33972224 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0ffee400 session 0x558f12772380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185434112 unmapped: 33972224 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f10b55400 session 0x558f1273ae00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f1272e800 session 0x558f0ffd7500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f15310c00 session 0x558f0fb0d6c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 39845888 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 39845888 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0f492c00 session 0x558f1245ba40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f7f34000/0x0/0x4ffc00000, data 0x19eb0bb/0x1c17000, compress 0x0/0x0/0x0, omap 0x6d854, meta 0x60427ac), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0ffee400 session 0x558f10792a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 39845888 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f10b55400 session 0x558f12650c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.223377228s of 10.394925117s, submitted: 53
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f1272e800 session 0x558f125b9880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2908565 data_alloc: 218103808 data_used: 4793288
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 39845888 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f7f34000/0x0/0x4ffc00000, data 0x19eb0cb/0x1c18000, compress 0x0/0x0/0x0, omap 0x6d55c, meta 0x6042aa4), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f16b2b000 session 0x558f12bace00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0f492c00 session 0x558f10793340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 39845888 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f1272e800 session 0x558f0fa44700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f1530f800 session 0x558f108c21c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 37396480 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0fca1400 session 0x558f0fb0cc40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f14e35400 session 0x558f126d01c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 37396480 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x19eb0bb/0x1c17000, compress 0x0/0x0/0x0, omap 0x6d700, meta 0x6042900), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x19eb0bb/0x1c17000, compress 0x0/0x0/0x0, omap 0x6d700, meta 0x6042900), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 37396480 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952237 data_alloc: 234881024 data_used: 12239800
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 37396480 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x19eb0bb/0x1c17000, compress 0x0/0x0/0x0, omap 0x6d700, meta 0x6042900), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 37396480 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 37396480 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 37396480 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x19eb0bb/0x1c17000, compress 0x0/0x0/0x0, omap 0x6d700, meta 0x6042900), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 37396480 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952093 data_alloc: 234881024 data_used: 12239800
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0fca1400 session 0x558f127b5c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f1272e800 session 0x558f0fa44540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 37396480 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0f492c00 session 0x558f0fa45180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 37396480 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f7f35000/0x0/0x4ffc00000, data 0x19eb0bb/0x1c17000, compress 0x0/0x0/0x0, omap 0x6d78c, meta 0x6042874), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.341001511s of 12.361576080s, submitted: 11
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184131584 unmapped: 35274752 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184131584 unmapped: 35274752 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184614912 unmapped: 34791424 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010011 data_alloc: 234881024 data_used: 13435832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184614912 unmapped: 34791424 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184614912 unmapped: 34791424 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f7616000/0x0/0x4ffc00000, data 0x23090cb/0x2536000, compress 0x0/0x0/0x0, omap 0x6d818, meta 0x60427e8), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184614912 unmapped: 34791424 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f1530f800 session 0x558f124e3a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184614912 unmapped: 34791424 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184614912 unmapped: 34791424 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011763 data_alloc: 234881024 data_used: 13435832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184614912 unmapped: 34791424 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0ffee400 session 0x558f108c2380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f10b55400 session 0x558f12cd16c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0f492c00 session 0x558f11e35dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184631296 unmapped: 34775040 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0fca1400 session 0x558f0fb0c700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f1272e800 session 0x558f11e34e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.757212639s of 10.001177788s, submitted: 106
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f1530f800 session 0x558f11e348c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184639488 unmapped: 34766848 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0f492c00 session 0x558f107576c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f7615000/0x0/0x4ffc00000, data 0x230a0bb/0x2536000, compress 0x0/0x0/0x0, omap 0x6d9f4, meta 0x604260c), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184647680 unmapped: 34758656 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184647680 unmapped: 34758656 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f601d000/0x0/0x4ffc00000, data 0x39030bb/0x3b2f000, compress 0x0/0x0/0x0, omap 0x6ddc8, meta 0x6042238), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3129047 data_alloc: 234881024 data_used: 13439928
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184647680 unmapped: 34758656 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184647680 unmapped: 34758656 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184647680 unmapped: 34758656 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f601d000/0x0/0x4ffc00000, data 0x39030bb/0x3b2f000, compress 0x0/0x0/0x0, omap 0x6ddc8, meta 0x6042238), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184647680 unmapped: 34758656 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0fca1400 session 0x558f126d0c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f10b55400 session 0x558f0ffd7a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 193069056 unmapped: 26337280 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f1272e800 session 0x558f1273b180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f1263d800 session 0x558f10776a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3302364 data_alloc: 234881024 data_used: 13439928
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0f492c00 session 0x558f108c3c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184688640 unmapped: 34717696 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0fca1400 session 0x558f10854c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f10b55400 session 0x558f113e1dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184688640 unmapped: 34717696 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f1263d800 session 0x558f108541c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f1272e800 session 0x558f0ffc3180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0f492c00 session 0x558f1273b500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 184688640 unmapped: 34717696 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f401d000/0x0/0x4ffc00000, data 0x59030bb/0x5b2f000, compress 0x0/0x0/0x0, omap 0x6e19c, meta 0x6041e64), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f0fca1400 session 0x558f12d46540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.163691521s of 10.512026787s, submitted: 33
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f10b55400 session 0x558f11e34a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185131008 unmapped: 34275328 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185139200 unmapped: 34267136 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3309363 data_alloc: 234881024 data_used: 13514168
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 185139200 unmapped: 34267136 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186015744 unmapped: 33390592 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f3ff8000/0x0/0x4ffc00000, data 0x59270cb/0x5b54000, compress 0x0/0x0/0x0, omap 0x6e412, meta 0x6041bee), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186376192 unmapped: 33030144 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186376192 unmapped: 33030144 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f3ff8000/0x0/0x4ffc00000, data 0x59270cb/0x5b54000, compress 0x0/0x0/0x0, omap 0x6e412, meta 0x6041bee), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186376192 unmapped: 33030144 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331123 data_alloc: 234881024 data_used: 17224632
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186376192 unmapped: 33030144 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f3ff8000/0x0/0x4ffc00000, data 0x59270cb/0x5b54000, compress 0x0/0x0/0x0, omap 0x6e412, meta 0x6041bee), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186376192 unmapped: 33030144 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186376192 unmapped: 33030144 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f3ff8000/0x0/0x4ffc00000, data 0x59270cb/0x5b54000, compress 0x0/0x0/0x0, omap 0x6e412, meta 0x6041bee), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186376192 unmapped: 33030144 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.639405251s of 11.674546242s, submitted: 9
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186490880 unmapped: 32915456 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3346387 data_alloc: 234881024 data_used: 18187192
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f3ff6000/0x0/0x4ffc00000, data 0x59270cb/0x5b54000, compress 0x0/0x0/0x0, omap 0x6e412, meta 0x6041bee), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186589184 unmapped: 32817152 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 186605568 unmapped: 32800768 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 ms_handle_reset con 0x558f12867400 session 0x558f12d47dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 193486848 unmapped: 25919488 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 193544192 unmapped: 25862144 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f2c5f000/0x0/0x4ffc00000, data 0x59500cb/0x5b7d000, compress 0x0/0x0/0x0, omap 0x6e412, meta 0x71e1bee), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 194125824 unmapped: 25280512 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475763 data_alloc: 234881024 data_used: 18756552
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 191954944 unmapped: 27451392 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 195026944 unmapped: 24379392 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 195026944 unmapped: 24379392 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 195026944 unmapped: 24379392 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 195026944 unmapped: 24379392 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f189c000/0x0/0x4ffc00000, data 0x6ee30cb/0x7110000, compress 0x0/0x0/0x0, omap 0x6e412, meta 0x71e1bee), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3503667 data_alloc: 234881024 data_used: 23501256
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 195026944 unmapped: 24379392 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 195043328 unmapped: 24363008 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f189c000/0x0/0x4ffc00000, data 0x6ee30cb/0x7110000, compress 0x0/0x0/0x0, omap 0x6e412, meta 0x71e1bee), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 195076096 unmapped: 24330240 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 195092480 unmapped: 24313856 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.230343819s of 14.990905762s, submitted: 64
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 402 handle_osd_map epochs [402,403], i have 403, src has [1,403]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 ms_handle_reset con 0x558f15311000 session 0x558f10855a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 195092480 unmapped: 24313856 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508475 data_alloc: 234881024 data_used: 23517640
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 ms_handle_reset con 0x558f0fca1400 session 0x558f12bac700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 ms_handle_reset con 0x558f0f492c00 session 0x558f113b9dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 24264704 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f1897000/0x0/0x4ffc00000, data 0x6ee4c67/0x7113000, compress 0x0/0x0/0x0, omap 0x6ed49, meta 0x71e12b7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 209682432 unmapped: 9723904 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 ms_handle_reset con 0x558f1263d800 session 0x558f113b8540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 ms_handle_reset con 0x558f10b55400 session 0x558f0ffc3dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201433088 unmapped: 17973248 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203587584 unmapped: 15818752 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203587584 unmapped: 15818752 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3562619 data_alloc: 234881024 data_used: 24247752
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203587584 unmapped: 15818752 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203587584 unmapped: 15818752 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 heartbeat osd_stat(store_statfs(0x4eff50000/0x0/0x4ffc00000, data 0x762dc67/0x785c000, compress 0x0/0x0/0x0, omap 0x6ecaa, meta 0x8381356), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203587584 unmapped: 15818752 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203628544 unmapped: 15777792 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203628544 unmapped: 15777792 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.517401695s of 10.820667267s, submitted: 108
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560331 data_alloc: 234881024 data_used: 24247752
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 ms_handle_reset con 0x558f16b2a800 session 0x558f0ffd6c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 ms_handle_reset con 0x558f1272c800 session 0x558f10776700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203751424 unmapped: 15654912 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 ms_handle_reset con 0x558f0fca1400 session 0x558f12d461c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 ms_handle_reset con 0x558f0f492c00 session 0x558f0fa448c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203800576 unmapped: 15605760 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203800576 unmapped: 15605760 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 heartbeat osd_stat(store_statfs(0x4effd3000/0x0/0x4ffc00000, data 0x7609c8a/0x7839000, compress 0x0/0x0/0x0, omap 0x6f0aa, meta 0x8380f56), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203800576 unmapped: 15605760 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 heartbeat osd_stat(store_statfs(0x4effd3000/0x0/0x4ffc00000, data 0x7609c8a/0x7839000, compress 0x0/0x0/0x0, omap 0x6f0aa, meta 0x8380f56), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 ms_handle_reset con 0x558f1263d800 session 0x558f108c3a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204029952 unmapped: 15376384 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559996 data_alloc: 234881024 data_used: 24147400
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 ms_handle_reset con 0x558f12867400 session 0x558f126f1c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 heartbeat osd_stat(store_statfs(0x4effc8000/0x0/0x4ffc00000, data 0x7615826/0x7842000, compress 0x0/0x0/0x0, omap 0x6f4a2, meta 0x8380b5e), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 15343616 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204079104 unmapped: 15327232 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204079104 unmapped: 15327232 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204079104 unmapped: 15327232 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204079104 unmapped: 15327232 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 ms_handle_reset con 0x558f0f492c00 session 0x558f12cd0a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559996 data_alloc: 234881024 data_used: 24147400
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204079104 unmapped: 15327232 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 heartbeat osd_stat(store_statfs(0x4effc8000/0x0/0x4ffc00000, data 0x7615826/0x7842000, compress 0x0/0x0/0x0, omap 0x6f4a2, meta 0x8380b5e), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 ms_handle_reset con 0x558f0fca1400 session 0x558f126d08c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 heartbeat osd_stat(store_statfs(0x4effc8000/0x0/0x4ffc00000, data 0x7615826/0x7842000, compress 0x0/0x0/0x0, omap 0x6f4a2, meta 0x8380b5e), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204079104 unmapped: 15327232 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 ms_handle_reset con 0x558f1263d800 session 0x558f12772c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.678190231s of 11.843404770s, submitted: 26
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 ms_handle_reset con 0x558f1272c800 session 0x558f0fdaa700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204128256 unmapped: 15278080 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 heartbeat osd_stat(store_statfs(0x4effc9000/0x0/0x4ffc00000, data 0x7615836/0x7843000, compress 0x0/0x0/0x0, omap 0x6f718, meta 0x83808e8), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204161024 unmapped: 15245312 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204480512 unmapped: 14925824 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573478 data_alloc: 234881024 data_used: 25408456
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204480512 unmapped: 14925824 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 ms_handle_reset con 0x558f12f4a800 session 0x558f108c2a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 heartbeat osd_stat(store_statfs(0x4eff83000/0x0/0x4ffc00000, data 0x765b836/0x7889000, compress 0x0/0x0/0x0, omap 0x6f75e, meta 0x83808a2), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 14745600 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 14745600 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 14745600 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 14745600 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 heartbeat osd_stat(store_statfs(0x4eff5f000/0x0/0x4ffc00000, data 0x767f836/0x78ad000, compress 0x0/0x0/0x0, omap 0x6f9a7, meta 0x8380659), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578055 data_alloc: 234881024 data_used: 25421768
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204701696 unmapped: 14704640 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204701696 unmapped: 14704640 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204701696 unmapped: 14704640 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.732506752s of 11.778338432s, submitted: 21
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 ms_handle_reset con 0x558f10b55400 session 0x558f1242fdc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204734464 unmapped: 14671872 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 ms_handle_reset con 0x558f1263d800 session 0x558f1245da40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204808192 unmapped: 14598144 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577283 data_alloc: 234881024 data_used: 25417672
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 heartbeat osd_stat(store_statfs(0x4eff5f000/0x0/0x4ffc00000, data 0x767f813/0x78ac000, compress 0x0/0x0/0x0, omap 0x6f941, meta 0x83806bf), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204808192 unmapped: 14598144 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 ms_handle_reset con 0x558f1272c800 session 0x558f0ffd7500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204931072 unmapped: 14475264 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 ms_handle_reset con 0x558f1287b400 session 0x558f0fb0d880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 ms_handle_reset con 0x558f0fa41800 session 0x558f113b96c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 205258752 unmapped: 14147584 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 heartbeat osd_stat(store_statfs(0x4effa6000/0x0/0x4ffc00000, data 0x7639813/0x7866000, compress 0x0/0x0/0x0, omap 0x6fcde, meta 0x8380322), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 405 ms_handle_reset con 0x558f0fa41800 session 0x558f124d56c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 205258752 unmapped: 14147584 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 205258752 unmapped: 14147584 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 405 ms_handle_reset con 0x558f10028c00 session 0x558f0ffc3c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 405 ms_handle_reset con 0x558f12867000 session 0x558f12b95340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580889 data_alloc: 234881024 data_used: 26093477
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 405 ms_handle_reset con 0x558f10b55400 session 0x558f0fdabc00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 205291520 unmapped: 14114816 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 405 heartbeat osd_stat(store_statfs(0x4effc7000/0x0/0x4ffc00000, data 0x76123f3/0x7843000, compress 0x0/0x0/0x0, omap 0x704d1, meta 0x837fb2f), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 205291520 unmapped: 14114816 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 205291520 unmapped: 14114816 heap: 219406336 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 405 ms_handle_reset con 0x558f1263d800 session 0x558f113b9a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 405 ms_handle_reset con 0x558f1263d800 session 0x558f108c36c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f0c1c000/0x0/0x4ffc00000, data 0x69bf3f3/0x6bf0000, compress 0x0/0x0/0x0, omap 0x7055d, meta 0x837faa3), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468038 data_alloc: 234881024 data_used: 21141925
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.281830788s of 12.518699646s, submitted: 84
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 406 heartbeat osd_stat(store_statfs(0x4f0c17000/0x0/0x4ffc00000, data 0x69c0e72/0x6bf3000, compress 0x0/0x0/0x0, omap 0x706e8, meta 0x837f918), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471372 data_alloc: 234881024 data_used: 21141925
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 406 heartbeat osd_stat(store_statfs(0x4f0c17000/0x0/0x4ffc00000, data 0x69c0e72/0x6bf3000, compress 0x0/0x0/0x0, omap 0x706e8, meta 0x837f918), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3471372 data_alloc: 234881024 data_used: 21141925
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 406 heartbeat osd_stat(store_statfs(0x4f0c17000/0x0/0x4ffc00000, data 0x69c0e72/0x6bf3000, compress 0x0/0x0/0x0, omap 0x706e8, meta 0x837f918), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 406 heartbeat osd_stat(store_statfs(0x4f0c17000/0x0/0x4ffc00000, data 0x69c0e72/0x6bf3000, compress 0x0/0x0/0x0, omap 0x706e8, meta 0x837f918), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204062720 unmapped: 19546112 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 406 heartbeat osd_stat(store_statfs(0x4f0c17000/0x0/0x4ffc00000, data 0x69c0e72/0x6bf3000, compress 0x0/0x0/0x0, omap 0x706e8, meta 0x837f918), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.404359818s of 12.409981728s, submitted: 12
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 406 ms_handle_reset con 0x558f0fa41800 session 0x558f113e0c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204095488 unmapped: 19513344 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204095488 unmapped: 19513344 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f10028c00 session 0x558f108c36c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3484135 data_alloc: 234881024 data_used: 21142023
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204103680 unmapped: 19505152 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f10b55400 session 0x558f1242fdc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f12867000 session 0x558f126d08c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 heartbeat osd_stat(store_statfs(0x4f0b0b000/0x0/0x4ffc00000, data 0x6ac9a80/0x6cff000, compress 0x0/0x0/0x0, omap 0x70d26, meta 0x837f2da), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f12867000 session 0x558f113e16c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f0fa41800 session 0x558f124d56c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204111872 unmapped: 19496960 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f10028c00 session 0x558f10756380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f15ca8800 session 0x558f0fa44e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f12731000 session 0x558f11e348c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 heartbeat osd_stat(store_statfs(0x4f0b09000/0x0/0x4ffc00000, data 0x6ac9bb6/0x6d03000, compress 0x0/0x0/0x0, omap 0x711b3, meta 0x837ee4d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f0fa41800 session 0x558f107761c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f10028c00 session 0x558f0e416fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204152832 unmapped: 19456000 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f12867000 session 0x558f0fdaa700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204152832 unmapped: 19456000 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f15ca8800 session 0x558f0fa448c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f10b55400 session 0x558f108c3a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f0f492c00 session 0x558f12cd0540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f0fca1400 session 0x558f0fa45c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 19398656 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f0fa41800 session 0x558f125b9500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f10028c00 session 0x558f113e0fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f12867000 session 0x558f10855a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482461 data_alloc: 234881024 data_used: 21742087
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204226560 unmapped: 19382272 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f0f492c00 session 0x558f12650a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204226560 unmapped: 19382272 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f0fa41800 session 0x558f113e1c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204226560 unmapped: 19382272 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f0fca1400 session 0x558f0ffc3180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 heartbeat osd_stat(store_statfs(0x4f0b32000/0x0/0x4ffc00000, data 0x6aa5a70/0x6cda000, compress 0x0/0x0/0x0, omap 0x71f33, meta 0x837e0cd), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f10028c00 session 0x558f10855dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204226560 unmapped: 19382272 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.394514084s of 10.669068336s, submitted: 160
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f15ca8800 session 0x558f10757a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 ms_handle_reset con 0x558f1263d800 session 0x558f113b9880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 408 ms_handle_reset con 0x558f0f492c00 session 0x558f122121c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 408 ms_handle_reset con 0x558f0fa41800 session 0x558f1273aa80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203448320 unmapped: 20160512 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 408 ms_handle_reset con 0x558f0fca1400 session 0x558f0fb0d180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 408 ms_handle_reset con 0x558f10028c00 session 0x558f113b8540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199139 data_alloc: 234881024 data_used: 15643671
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 408 ms_handle_reset con 0x558f0f492c00 session 0x558f12bac380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 408 ms_handle_reset con 0x558f0fa41800 session 0x558f12cd0fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 202252288 unmapped: 21356544 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 202252288 unmapped: 21356544 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 408 heartbeat osd_stat(store_statfs(0x4f3bc7000/0x0/0x4ffc00000, data 0x3a0f5ee/0x3c43000, compress 0x0/0x0/0x0, omap 0x725ed, meta 0x837da13), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 408 ms_handle_reset con 0x558f0fca1400 session 0x558f113e08c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 408 ms_handle_reset con 0x558f10028c00 session 0x558f113e08c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 202268672 unmapped: 21340160 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 202268672 unmapped: 21340160 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 408 heartbeat osd_stat(store_statfs(0x4f3bc9000/0x0/0x4ffc00000, data 0x3a0f5ee/0x3c43000, compress 0x0/0x0/0x0, omap 0x72705, meta 0x837d8fb), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 202268672 unmapped: 21340160 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197951 data_alloc: 234881024 data_used: 15643541
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 408 ms_handle_reset con 0x558f1263d800 session 0x558f10777180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201375744 unmapped: 22233088 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201375744 unmapped: 22233088 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 410 ms_handle_reset con 0x558f0f492c00 session 0x558f113b9880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201375744 unmapped: 22233088 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 410 ms_handle_reset con 0x558f0fa41800 session 0x558f12cd0fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 410 ms_handle_reset con 0x558f0fca1400 session 0x558f1242f880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 410 ms_handle_reset con 0x558f1272c800 session 0x558f1242e380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 410 ms_handle_reset con 0x558f10028c00 session 0x558f128b9340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201383936 unmapped: 22224896 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.368647575s of 10.654926300s, submitted: 94
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201383936 unmapped: 22224896 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3212132 data_alloc: 234881024 data_used: 15643557
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 410 handle_osd_map epochs [410,411], i have 411, src has [1,411]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 411 ms_handle_reset con 0x558f10028c00 session 0x558f124d4fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 411 heartbeat osd_stat(store_statfs(0x4f3bbc000/0x0/0x4ffc00000, data 0x3a12cdd/0x3c4c000, compress 0x0/0x0/0x0, omap 0x72c57, meta 0x837d3a9), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201351168 unmapped: 22257664 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 411 ms_handle_reset con 0x558f0fa41800 session 0x558f12d46000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 411 ms_handle_reset con 0x558f0f492c00 session 0x558f12212540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 22216704 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201392128 unmapped: 22216704 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 ms_handle_reset con 0x558f0fca1400 session 0x558f125b8a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201441280 unmapped: 22167552 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 heartbeat osd_stat(store_statfs(0x4f3bba000/0x0/0x4ffc00000, data 0x3a148db/0x3c50000, compress 0x0/0x0/0x0, omap 0x737c2, meta 0x837c83e), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 ms_handle_reset con 0x558f1272c800 session 0x558f1273a540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 ms_handle_reset con 0x558f0f492c00 session 0x558f1242fdc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201441280 unmapped: 22167552 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 heartbeat osd_stat(store_statfs(0x4f3bb5000/0x0/0x4ffc00000, data 0x3a1653b/0x3c55000, compress 0x0/0x0/0x0, omap 0x73f2b, meta 0x837c0d5), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3230347 data_alloc: 234881024 data_used: 15643851
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 ms_handle_reset con 0x558f0fca1400 session 0x558f108c2000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 ms_handle_reset con 0x558f10028c00 session 0x558f113b9dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 22134784 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 ms_handle_reset con 0x558f0fa41800 session 0x558f10855dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 22134784 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 ms_handle_reset con 0x558f1287b400 session 0x558f125b8fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 ms_handle_reset con 0x558f0f492c00 session 0x558f12bac700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 heartbeat osd_stat(store_statfs(0x4f3b76000/0x0/0x4ffc00000, data 0x3a5659e/0x3c96000, compress 0x0/0x0/0x0, omap 0x740e0, meta 0x837bf20), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201482240 unmapped: 22126592 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 ms_handle_reset con 0x558f0fa41800 session 0x558f0fdabc00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 ms_handle_reset con 0x558f10028c00 session 0x558f124d5c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 413 ms_handle_reset con 0x558f10b54400 session 0x558f10776e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 413 ms_handle_reset con 0x558f0fca1400 session 0x558f126d0e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201498624 unmapped: 22110208 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 413 ms_handle_reset con 0x558f0f492c00 session 0x558f126d0a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 413 ms_handle_reset con 0x558f10028c00 session 0x558f1074fa40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.739815712s of 10.004219055s, submitted: 97
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201506816 unmapped: 22102016 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3230312 data_alloc: 234881024 data_used: 15648418
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 414 ms_handle_reset con 0x558f10b54400 session 0x558f1273b880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 414 ms_handle_reset con 0x558f0fa41800 session 0x558f0fb0c540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201555968 unmapped: 22052864 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201555968 unmapped: 22052864 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 414 ms_handle_reset con 0x558f1287b400 session 0x558f12d47880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 414 ms_handle_reset con 0x558f0f492c00 session 0x558f1273b180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 414 heartbeat osd_stat(store_statfs(0x4f3b75000/0x0/0x4ffc00000, data 0x3a59be6/0x3c97000, compress 0x0/0x0/0x0, omap 0x75031, meta 0x837afcf), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201596928 unmapped: 22011904 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 414 ms_handle_reset con 0x558f0fa41800 session 0x558f10854fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201596928 unmapped: 22011904 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 414 heartbeat osd_stat(store_statfs(0x4f3b75000/0x0/0x4ffc00000, data 0x3a59be6/0x3c97000, compress 0x0/0x0/0x0, omap 0x7547f, meta 0x837ab81), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 415 ms_handle_reset con 0x558f10028c00 session 0x558f0ffc3c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201605120 unmapped: 22003712 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3235686 data_alloc: 234881024 data_used: 15648320
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201605120 unmapped: 22003712 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 415 ms_handle_reset con 0x558f1530e400 session 0x558f10855880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 415 ms_handle_reset con 0x558f10b54400 session 0x558f125b8700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201605120 unmapped: 22003712 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201605120 unmapped: 22003712 heap: 223608832 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 416 heartbeat osd_stat(store_statfs(0x4f3b6c000/0x0/0x4ffc00000, data 0x3a5d249/0x3c9e000, compress 0x0/0x0/0x0, omap 0x75947, meta 0x837a6b9), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 416 handle_osd_map epochs [416,417], i have 417, src has [1,417]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 417 ms_handle_reset con 0x558f0f492c00 session 0x558f12bac1c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 417 ms_handle_reset con 0x558f0fa41800 session 0x558f1242f340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 417 ms_handle_reset con 0x558f10028c00 session 0x558f12baddc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 417 ms_handle_reset con 0x558f10c03800 session 0x558f125b8380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 417 ms_handle_reset con 0x558f1530e400 session 0x558f0ffc3dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 205848576 unmapped: 26165248 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 417 ms_handle_reset con 0x558f14e33400 session 0x558f113e16c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 417 ms_handle_reset con 0x558f1530e400 session 0x558f12651880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 417 ms_handle_reset con 0x558f0f492c00 session 0x558f113e1a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.481172562s of 10.023729324s, submitted: 166
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 417 ms_handle_reset con 0x558f0fa41800 session 0x558f0e416fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201654272 unmapped: 30359552 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480511 data_alloc: 234881024 data_used: 15649094
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201654272 unmapped: 30359552 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 417 ms_handle_reset con 0x558f10028c00 session 0x558f0ffc3180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 418 ms_handle_reset con 0x558f0f492c00 session 0x558f107568c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201654272 unmapped: 30359552 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 418 ms_handle_reset con 0x558f0fa41800 session 0x558f128b9180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 418 ms_handle_reset con 0x558f1530e400 session 0x558f12212540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201670656 unmapped: 30343168 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 419 heartbeat osd_stat(store_statfs(0x4f0fa2000/0x0/0x4ffc00000, data 0x66224d1/0x6868000, compress 0x0/0x0/0x0, omap 0x76edb, meta 0x8379125), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 419 ms_handle_reset con 0x558f10c03800 session 0x558f1273b180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 419 ms_handle_reset con 0x558f14e33400 session 0x558f10756380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 202752000 unmapped: 29261824 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 419 ms_handle_reset con 0x558f0f492c00 session 0x558f113b8540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 419 ms_handle_reset con 0x558f10c03800 session 0x558f108c3dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 202752000 unmapped: 29261824 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 420 ms_handle_reset con 0x558f1530e400 session 0x558f107761c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 420 ms_handle_reset con 0x558f0fa41800 session 0x558f1242e380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3488066 data_alloc: 234881024 data_used: 15649350
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 202760192 unmapped: 29253632 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 420 ms_handle_reset con 0x558f16b28800 session 0x558f1074fa40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 420 ms_handle_reset con 0x558f0f492c00 session 0x558f1242fdc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 202776576 unmapped: 29237248 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 420 ms_handle_reset con 0x558f0fa41800 session 0x558f12650000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 420 ms_handle_reset con 0x558f10c03800 session 0x558f0fb0d880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 202784768 unmapped: 29229056 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 202784768 unmapped: 29229056 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 420 handle_osd_map epochs [420,421], i have 421, src has [1,421]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f0fa2000/0x0/0x4ffc00000, data 0x66240b1/0x686a000, compress 0x0/0x0/0x0, omap 0x77702, meta 0x83788fe), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 421 ms_handle_reset con 0x558f1530e400 session 0x558f125b96c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 421 ms_handle_reset con 0x558f14e30000 session 0x558f1273b880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 202801152 unmapped: 29212672 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 421 ms_handle_reset con 0x558f0f492c00 session 0x558f126d0e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.122818947s of 10.425618172s, submitted: 121
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 421 ms_handle_reset con 0x558f0fa41800 session 0x558f0fdabc00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494804 data_alloc: 234881024 data_used: 15649448
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 421 ms_handle_reset con 0x558f14e30000 session 0x558f108c36c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 421 ms_handle_reset con 0x558f10c03800 session 0x558f113b9dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 202809344 unmapped: 29204480 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 422 ms_handle_reset con 0x558f173afc00 session 0x558f124d4fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 422 ms_handle_reset con 0x558f173afc00 session 0x558f12773500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201408512 unmapped: 30605312 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 422 ms_handle_reset con 0x558f0f492c00 session 0x558f0fa45340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201408512 unmapped: 30605312 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 422 ms_handle_reset con 0x558f10c03800 session 0x558f10b3ca80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f0f9a000/0x0/0x4ffc00000, data 0x6627708/0x6872000, compress 0x0/0x0/0x0, omap 0x787ce, meta 0x8377832), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 201441280 unmapped: 30572544 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 423 ms_handle_reset con 0x558f14e30000 session 0x558f12bac700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 423 ms_handle_reset con 0x558f0fa41800 session 0x558f0fa45c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203276288 unmapped: 28737536 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 423 ms_handle_reset con 0x558f0fa41800 session 0x558f0fdaa700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3555797 data_alloc: 234881024 data_used: 24132495
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203284480 unmapped: 28729344 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f0f96000/0x0/0x4ffc00000, data 0x6629296/0x6874000, compress 0x0/0x0/0x0, omap 0x78d3c, meta 0x83772c4), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 203284480 unmapped: 28729344 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 424 ms_handle_reset con 0x558f0f492c00 session 0x558f0fb0d880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 424 ms_handle_reset con 0x558f10c03800 session 0x558f10757180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204333056 unmapped: 27680768 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 424 ms_handle_reset con 0x558f14e30000 session 0x558f113b9a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204333056 unmapped: 27680768 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 424 ms_handle_reset con 0x558f173afc00 session 0x558f10855a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 424 ms_handle_reset con 0x558f173afc00 session 0x558f113e0c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f0f92000/0x0/0x4ffc00000, data 0x662ae5e/0x6878000, compress 0x0/0x0/0x0, omap 0x78fcf, meta 0x8377031), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204341248 unmapped: 27672576 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f0f92000/0x0/0x4ffc00000, data 0x662ae5e/0x6878000, compress 0x0/0x0/0x0, omap 0x78fcf, meta 0x8377031), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557851 data_alloc: 234881024 data_used: 24132495
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.168301582s of 10.383543015s, submitted: 98
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 424 ms_handle_reset con 0x558f0f492c00 session 0x558f125b8700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 424 ms_handle_reset con 0x558f0fa41800 session 0x558f12bac1c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204349440 unmapped: 27664384 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 424 handle_osd_map epochs [424,425], i have 425, src has [1,425]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204357632 unmapped: 27656192 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 204357632 unmapped: 27656192 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 208142336 unmapped: 23871488 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218193920 unmapped: 13819904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 425 heartbeat osd_stat(store_statfs(0x4f0f8e000/0x0/0x4ffc00000, data 0x66318bd/0x687e000, compress 0x0/0x0/0x0, omap 0x79561, meta 0x8376a9f), peers [0,2] op hist [0,0,0,1])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615797 data_alloc: 234881024 data_used: 25508735
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212926464 unmapped: 19087360 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212926464 unmapped: 19087360 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 425 ms_handle_reset con 0x558f10c03800 session 0x558f12650700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 425 heartbeat osd_stat(store_statfs(0x4ef9ff000/0x0/0x4ffc00000, data 0x7b908bd/0x7ddd000, compress 0x0/0x0/0x0, omap 0x79561, meta 0x8376a9f), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212942848 unmapped: 19070976 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212942848 unmapped: 19070976 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212942848 unmapped: 19070976 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3706945 data_alloc: 234881024 data_used: 25521023
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 425 heartbeat osd_stat(store_statfs(0x4ef9fd000/0x0/0x4ffc00000, data 0x7b9092f/0x7ddf000, compress 0x0/0x0/0x0, omap 0x79863, meta 0x837679d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.716126442s of 10.209928513s, submitted: 185
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 425 ms_handle_reset con 0x558f1500d000 session 0x558f126d1880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 20471808 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 426 ms_handle_reset con 0x558f0f492c00 session 0x558f10b3c700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211599360 unmapped: 20414464 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 427 ms_handle_reset con 0x558f0fa41800 session 0x558f124d4c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 427 ms_handle_reset con 0x558f14e30000 session 0x558f1292d6c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211484672 unmapped: 20529152 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 427 ms_handle_reset con 0x558f1530e400 session 0x558f10777180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 427 ms_handle_reset con 0x558f124a8000 session 0x558f1245da40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 427 ms_handle_reset con 0x558f0f492c00 session 0x558f108c3880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211484672 unmapped: 20529152 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 427 heartbeat osd_stat(store_statfs(0x4efa24000/0x0/0x4ffc00000, data 0x7b940c9/0x7de6000, compress 0x0/0x0/0x0, omap 0x7a2f5, meta 0x8375d0b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 428 ms_handle_reset con 0x558f0fa41800 session 0x558f113b81c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212566016 unmapped: 19447808 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 429 ms_handle_reset con 0x558f14e30000 session 0x558f113e16c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3713548 data_alloc: 234881024 data_used: 25562081
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 429 ms_handle_reset con 0x558f1530e400 session 0x558f1273b180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212615168 unmapped: 19398656 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212615168 unmapped: 19398656 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212615168 unmapped: 19398656 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 429 heartbeat osd_stat(store_statfs(0x4efa22000/0x0/0x4ffc00000, data 0x7b977e1/0x7de8000, compress 0x0/0x0/0x0, omap 0x7aa4e, meta 0x83755b2), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212615168 unmapped: 19398656 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212615168 unmapped: 19398656 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712936 data_alloc: 234881024 data_used: 25562596
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 429 ms_handle_reset con 0x558f10c03800 session 0x558f0fdabdc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212615168 unmapped: 19398656 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.960172653s of 10.304980278s, submitted: 124
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 430 heartbeat osd_stat(store_statfs(0x4efa22000/0x0/0x4ffc00000, data 0x7b977e1/0x7de8000, compress 0x0/0x0/0x0, omap 0x7aa4e, meta 0x83755b2), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212623360 unmapped: 19390464 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 430 ms_handle_reset con 0x558f0fa41800 session 0x558f0fb0c8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212623360 unmapped: 19390464 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 431 ms_handle_reset con 0x558f14e30000 session 0x558f11e34a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212623360 unmapped: 19390464 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 431 ms_handle_reset con 0x558f1530e400 session 0x558f12651180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 432 ms_handle_reset con 0x558f173afc00 session 0x558f113b96c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 432 ms_handle_reset con 0x558f0f492c00 session 0x558f124d5a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212631552 unmapped: 19382272 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 432 heartbeat osd_stat(store_statfs(0x4efa17000/0x0/0x4ffc00000, data 0x7b9cef1/0x7df3000, compress 0x0/0x0/0x0, omap 0x7ba4d, meta 0x83745b3), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3724702 data_alloc: 234881024 data_used: 25562596
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212631552 unmapped: 19382272 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 433 ms_handle_reset con 0x558f0fa41800 session 0x558f0fb0dc00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212639744 unmapped: 19374080 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 434 ms_handle_reset con 0x558f14e30000 session 0x558f126d08c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 434 ms_handle_reset con 0x558f173afc00 session 0x558f0fa44fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212647936 unmapped: 19365888 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 434 handle_osd_map epochs [434,435], i have 435, src has [1,435]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 435 ms_handle_reset con 0x558f1530e400 session 0x558f10854e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 435 ms_handle_reset con 0x558f0f492c00 session 0x558f12d46c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 19357696 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 435 ms_handle_reset con 0x558f14e30000 session 0x558f113e1c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 436 ms_handle_reset con 0x558f0fa41800 session 0x558f113b8a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212672512 unmapped: 19341312 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745517 data_alloc: 234881024 data_used: 25563453
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212672512 unmapped: 19341312 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.866049767s of 10.010087013s, submitted: 72
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 436 heartbeat osd_stat(store_statfs(0x4efa05000/0x0/0x4ffc00000, data 0x7ba3d1d/0x7e01000, compress 0x0/0x0/0x0, omap 0x7c255, meta 0x8373dab), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 436 ms_handle_reset con 0x558f173afc00 session 0x558f124d5340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212688896 unmapped: 19324928 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 437 ms_handle_reset con 0x558f12193c00 session 0x558f12772c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212697088 unmapped: 19316736 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 438 ms_handle_reset con 0x558f1263d000 session 0x558f113b9340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 438 ms_handle_reset con 0x558f1530e400 session 0x558f10776700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212770816 unmapped: 19243008 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 439 ms_handle_reset con 0x558f12193c00 session 0x558f10756fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 439 ms_handle_reset con 0x558f0fa41800 session 0x558f10757180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 439 ms_handle_reset con 0x558f14e30000 session 0x558f12bac1c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213696512 unmapped: 18317312 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 439 ms_handle_reset con 0x558f173afc00 session 0x558f1292c700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758895 data_alloc: 234881024 data_used: 25686434
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 439 handle_osd_map epochs [439,440], i have 439, src has [1,440]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213712896 unmapped: 18300928 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 440 heartbeat osd_stat(store_statfs(0x4ef9fa000/0x0/0x4ffc00000, data 0x7baac7b/0x7e0c000, compress 0x0/0x0/0x0, omap 0x7d769, meta 0x8372897), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213712896 unmapped: 18300928 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 440 ms_handle_reset con 0x558f12193c00 session 0x558f1242e540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213737472 unmapped: 18276352 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 441 ms_handle_reset con 0x558f1530e400 session 0x558f128b9340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213737472 unmapped: 18276352 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 442 ms_handle_reset con 0x558f14e30000 session 0x558f1245c8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 442 ms_handle_reset con 0x558f0fa41800 session 0x558f125b8380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213753856 unmapped: 18259968 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 443 ms_handle_reset con 0x558f16b2b400 session 0x558f12650000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 443 ms_handle_reset con 0x558f0fa41800 session 0x558f125b9500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3772691 data_alloc: 234881024 data_used: 25686434
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 443 handle_osd_map epochs [443,444], i have 443, src has [1,444]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213778432 unmapped: 18235392 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 444 heartbeat osd_stat(store_statfs(0x4ef9f6000/0x0/0x4ffc00000, data 0x7bae495/0x7e14000, compress 0x0/0x0/0x0, omap 0x7d769, meta 0x8372897), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.839119911s of 10.120203018s, submitted: 198
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 444 ms_handle_reset con 0x558f12193c00 session 0x558f113b88c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214122496 unmapped: 17891328 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214122496 unmapped: 17891328 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 444 ms_handle_reset con 0x558f1530e400 session 0x558f1242f180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214122496 unmapped: 17891328 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 444 handle_osd_map epochs [444,445], i have 444, src has [1,445]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 445 ms_handle_reset con 0x558f16b2b400 session 0x558f10793880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214122496 unmapped: 17891328 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 446 ms_handle_reset con 0x558f1530f400 session 0x558f12213a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 446 ms_handle_reset con 0x558f14e30000 session 0x558f12026540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3799226 data_alloc: 251658240 data_used: 27137629
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 446 heartbeat osd_stat(store_statfs(0x4ef9ea000/0x0/0x4ffc00000, data 0x7bb3800/0x7e20000, compress 0x0/0x0/0x0, omap 0x7d8c2, meta 0x837273e), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214122496 unmapped: 17891328 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 448 ms_handle_reset con 0x558f0fa41800 session 0x558f0fdaba40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214204416 unmapped: 17809408 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 448 ms_handle_reset con 0x558f12193c00 session 0x558f12d46540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 448 ms_handle_reset con 0x558f1530e400 session 0x558f0ffc3340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214204416 unmapped: 17809408 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214269952 unmapped: 17743872 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 448 ms_handle_reset con 0x558f177b5400 session 0x558f0fb0cfc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 448 ms_handle_reset con 0x558f0fa41800 session 0x558f130b6c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 448 heartbeat osd_stat(store_statfs(0x4ef9de000/0x0/0x4ffc00000, data 0x7bb89fd/0x7e28000, compress 0x0/0x0/0x0, omap 0x7da60, meta 0x83725a0), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 448 handle_osd_map epochs [448,449], i have 449, src has [1,449]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 449 ms_handle_reset con 0x558f12193c00 session 0x558f12212540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214319104 unmapped: 17694720 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 449 ms_handle_reset con 0x558f14e30000 session 0x558f12bada40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 449 ms_handle_reset con 0x558f1530e400 session 0x558f10b3ca80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 449 ms_handle_reset con 0x558f16b2b400 session 0x558f1273aa80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3808050 data_alloc: 251658240 data_used: 27136411
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214319104 unmapped: 17694720 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 450 ms_handle_reset con 0x558f0fa41800 session 0x558f1242e000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.898098946s of 10.027269363s, submitted: 102
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 450 ms_handle_reset con 0x558f12193c00 session 0x558f12d47dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214360064 unmapped: 17653760 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214548480 unmapped: 17465344 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 451 ms_handle_reset con 0x558f14e30000 session 0x558f12d46540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 451 heartbeat osd_stat(store_statfs(0x4ef9df000/0x0/0x4ffc00000, data 0x7bbc1cd/0x7e2d000, compress 0x0/0x0/0x0, omap 0x7e0f4, meta 0x8371f0c), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214597632 unmapped: 17416192 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 214597632 unmapped: 17416192 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 452 ms_handle_reset con 0x558f1530e400 session 0x558f10757180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3813864 data_alloc: 251658240 data_used: 27136898
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215646208 unmapped: 16367616 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 453 ms_handle_reset con 0x558f1500d000 session 0x558f0e417340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 453 ms_handle_reset con 0x558f0fa41800 session 0x558f124d4e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215670784 unmapped: 16343040 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215769088 unmapped: 16244736 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 455 ms_handle_reset con 0x558f12193c00 session 0x558f10756000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 455 ms_handle_reset con 0x558f14e30000 session 0x558f10854700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 455 heartbeat osd_stat(store_statfs(0x4ef9d6000/0x0/0x4ffc00000, data 0x7bc2c8c/0x7e34000, compress 0x0/0x0/0x0, omap 0x7e6fe, meta 0x8371902), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215777280 unmapped: 16236544 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215777280 unmapped: 16236544 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3820616 data_alloc: 234881024 data_used: 27107554
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 455 handle_osd_map epochs [455,456], i have 455, src has [1,456]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215785472 unmapped: 16228352 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.738427162s of 10.064435005s, submitted: 141
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215785472 unmapped: 16228352 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 456 heartbeat osd_stat(store_statfs(0x4ef9d0000/0x0/0x4ffc00000, data 0x7bc63a7/0x7e3a000, compress 0x0/0x0/0x0, omap 0x7ee1c, meta 0x83711e4), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215793664 unmapped: 16220160 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 456 heartbeat osd_stat(store_statfs(0x4ef9d0000/0x0/0x4ffc00000, data 0x7bc63a7/0x7e3a000, compress 0x0/0x0/0x0, omap 0x7ee1c, meta 0x83711e4), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 456 ms_handle_reset con 0x558f1530e400 session 0x558f10b3cc40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215793664 unmapped: 16220160 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215793664 unmapped: 16220160 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3823214 data_alloc: 234881024 data_used: 27107554
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215793664 unmapped: 16220160 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 457 ms_handle_reset con 0x558f134cfc00 session 0x558f1273b880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 457 heartbeat osd_stat(store_statfs(0x4ef9cb000/0x0/0x4ffc00000, data 0x7bc7e5e/0x7e3d000, compress 0x0/0x0/0x0, omap 0x7efba, meta 0x8371046), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 457 ms_handle_reset con 0x558f1500c800 session 0x558f10776e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 457 heartbeat osd_stat(store_statfs(0x4ef9cb000/0x0/0x4ffc00000, data 0x7bc7e5e/0x7e3d000, compress 0x0/0x0/0x0, omap 0x7efba, meta 0x8371046), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 457 ms_handle_reset con 0x558f12f4a000 session 0x558f113e01c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 458 ms_handle_reset con 0x558f0fa41800 session 0x558f12bada40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215900160 unmapped: 16113664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 458 heartbeat osd_stat(store_statfs(0x4ef9ce000/0x0/0x4ffc00000, data 0x7bc7ec0/0x7e3e000, compress 0x0/0x0/0x0, omap 0x7efba, meta 0x8371046), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 458 ms_handle_reset con 0x558f12193c00 session 0x558f0fb0dc00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 458 handle_osd_map epochs [458,459], i have 459, src has [1,459]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 459 ms_handle_reset con 0x558f14e30000 session 0x558f10757a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215900160 unmapped: 16113664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 459 ms_handle_reset con 0x558f11f8d800 session 0x558f12cd1500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 459 heartbeat osd_stat(store_statfs(0x4ef9c8000/0x0/0x4ffc00000, data 0x7bc9a9b/0x7e41000, compress 0x0/0x0/0x0, omap 0x7f227, meta 0x8370dd9), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215900160 unmapped: 16113664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215908352 unmapped: 16105472 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 460 ms_handle_reset con 0x558f0fa41800 session 0x558f12651a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3835939 data_alloc: 251658240 data_used: 27222207
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 215908352 unmapped: 16105472 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.115837097s of 10.266657829s, submitted: 94
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 460 ms_handle_reset con 0x558f12f4a000 session 0x558f0fa448c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 460 ms_handle_reset con 0x558f1500c800 session 0x558f1242e000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 461 ms_handle_reset con 0x558f12193c00 session 0x558f0ffc28c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213016576 unmapped: 18997248 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 461 ms_handle_reset con 0x558f0fa41800 session 0x558f113b88c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213016576 unmapped: 18997248 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 461 heartbeat osd_stat(store_statfs(0x4f3b29000/0x0/0x4ffc00000, data 0x3a6ad0d/0x3ce3000, compress 0x0/0x0/0x0, omap 0x7f8bb, meta 0x8370745), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213016576 unmapped: 18997248 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213016576 unmapped: 18997248 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3418627 data_alloc: 234881024 data_used: 14963394
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213016576 unmapped: 18997248 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213016576 unmapped: 18997248 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213016576 unmapped: 18997248 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213016576 unmapped: 18997248 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f3b24000/0x0/0x4ffc00000, data 0x3a6c7a8/0x3ce6000, compress 0x0/0x0/0x0, omap 0x7f945, meta 0x83706bb), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213016576 unmapped: 18997248 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422121 data_alloc: 234881024 data_used: 14963394
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 462 ms_handle_reset con 0x558f11f8d800 session 0x558f124d5c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 462 handle_osd_map epochs [462,463], i have 462, src has [1,463]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213016576 unmapped: 18997248 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213016576 unmapped: 18997248 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 463 heartbeat osd_stat(store_statfs(0x4f3b20000/0x0/0x4ffc00000, data 0x3a6e289/0x3cea000, compress 0x0/0x0/0x0, omap 0x7fa59, meta 0x83705a7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 463 heartbeat osd_stat(store_statfs(0x4f3b20000/0x0/0x4ffc00000, data 0x3a6e289/0x3cea000, compress 0x0/0x0/0x0, omap 0x7fa59, meta 0x83705a7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.343159676s of 11.458642960s, submitted: 75
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 463 ms_handle_reset con 0x558f12f4a000 session 0x558f0fdaba40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 463 handle_osd_map epochs [463,464], i have 463, src has [1,464]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 464 ms_handle_reset con 0x558f1500c800 session 0x558f12cd1c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213024768 unmapped: 18989056 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 465 ms_handle_reset con 0x558f1530e400 session 0x558f107568c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 465 ms_handle_reset con 0x558f12193c00 session 0x558f128b8fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213032960 unmapped: 18980864 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 465 heartbeat osd_stat(store_statfs(0x4f3b17000/0x0/0x4ffc00000, data 0x3a71a23/0x3cf1000, compress 0x0/0x0/0x0, omap 0x80299, meta 0x836fd67), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213032960 unmapped: 18980864 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435113 data_alloc: 234881024 data_used: 14963862
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213041152 unmapped: 18972672 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 466 ms_handle_reset con 0x558f0fa41800 session 0x558f1242f180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 467 ms_handle_reset con 0x558f11f8d800 session 0x558f12651dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 467 heartbeat osd_stat(store_statfs(0x4f3b16000/0x0/0x4ffc00000, data 0x3a751bd/0x3cf6000, compress 0x0/0x0/0x0, omap 0x8030f, meta 0x836fcf1), peers [0,2] op hist [0,0,0,0,0,1])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 467 ms_handle_reset con 0x558f12f4a000 session 0x558f12027dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 467 heartbeat osd_stat(store_statfs(0x4f3b17000/0x0/0x4ffc00000, data 0x3a7515b/0x3cf5000, compress 0x0/0x0/0x0, omap 0x80399, meta 0x836fc67), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 467 heartbeat osd_stat(store_statfs(0x4f3b17000/0x0/0x4ffc00000, data 0x3a7515b/0x3cf5000, compress 0x0/0x0/0x0, omap 0x80399, meta 0x836fc67), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438546 data_alloc: 234881024 data_used: 14964864
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 467 heartbeat osd_stat(store_statfs(0x4f3b17000/0x0/0x4ffc00000, data 0x3a7515b/0x3cf5000, compress 0x0/0x0/0x0, omap 0x80399, meta 0x836fc67), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438546 data_alloc: 234881024 data_used: 14964864
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.680846214s of 13.075779915s, submitted: 83
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f3b12000/0x0/0x4ffc00000, data 0x3a76c12/0x3cf8000, compress 0x0/0x0/0x0, omap 0x80a2d, meta 0x836f5d3), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 468 ms_handle_reset con 0x558f1500c800 session 0x558f1242ec40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442040 data_alloc: 234881024 data_used: 14964864
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213073920 unmapped: 18939904 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 469 ms_handle_reset con 0x558f0fa41800 session 0x558f126d0a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 469 ms_handle_reset con 0x558f11f8d800 session 0x558f12cd1180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213082112 unmapped: 18931712 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 469 ms_handle_reset con 0x558f12193c00 session 0x558f1245ddc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f3b0f000/0x0/0x4ffc00000, data 0x3a787ae/0x3cfb000, compress 0x0/0x0/0x0, omap 0x80ab7, meta 0x836f549), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 470 heartbeat osd_stat(store_statfs(0x4f3b0f000/0x0/0x4ffc00000, data 0x3a787ae/0x3cfb000, compress 0x0/0x0/0x0, omap 0x80ab7, meta 0x836f549), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213090304 unmapped: 18923520 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 470 ms_handle_reset con 0x558f12f4a000 session 0x558f0ffd6c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 470 ms_handle_reset con 0x558f12673000 session 0x558f0fa44380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213090304 unmapped: 18923520 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 470 ms_handle_reset con 0x558f0fa41800 session 0x558f1242e540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213090304 unmapped: 18923520 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450828 data_alloc: 234881024 data_used: 14964864
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213090304 unmapped: 18923520 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.417816162s of 10.499139786s, submitted: 42
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 470 heartbeat osd_stat(store_statfs(0x4f3b0b000/0x0/0x4ffc00000, data 0x3a7a400/0x3cff000, compress 0x0/0x0/0x0, omap 0x80e02, meta 0x836f1fe), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213090304 unmapped: 18923520 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 471 ms_handle_reset con 0x558f11f8d800 session 0x558f113e1c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 471 ms_handle_reset con 0x558f12193c00 session 0x558f1242ec40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 471 heartbeat osd_stat(store_statfs(0x4f3b0d000/0x0/0x4ffc00000, data 0x3a7a400/0x3cff000, compress 0x0/0x0/0x0, omap 0x80e02, meta 0x836f1fe), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212393984 unmapped: 19619840 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 472 ms_handle_reset con 0x558f12f4a000 session 0x558f0ffc3180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 472 ms_handle_reset con 0x558f14be6400 session 0x558f127b5340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 472 ms_handle_reset con 0x558f0fa41800 session 0x558f126d1880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211714048 unmapped: 20299776 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211714048 unmapped: 20299776 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 472 ms_handle_reset con 0x558f11f8d800 session 0x558f10854540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455062 data_alloc: 234881024 data_used: 14964864
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 472 handle_osd_map epochs [472,473], i have 472, src has [1,473]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f3b06000/0x0/0x4ffc00000, data 0x3a7db46/0x3d04000, compress 0x0/0x0/0x0, omap 0x81c5f, meta 0x836e3a1), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211714048 unmapped: 20299776 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211714048 unmapped: 20299776 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 474 ms_handle_reset con 0x558f12193c00 session 0x558f124d5500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 474 ms_handle_reset con 0x558f12f4a000 session 0x558f13099dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211656704 unmapped: 20357120 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 474 ms_handle_reset con 0x558f14be6400 session 0x558f10756c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 475 heartbeat osd_stat(store_statfs(0x4f3b00000/0x0/0x4ffc00000, data 0x3a81199/0x3d0a000, compress 0x0/0x0/0x0, omap 0x81d73, meta 0x836e28d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 475 ms_handle_reset con 0x558f0fa41800 session 0x558f12212e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211656704 unmapped: 20357120 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 475 ms_handle_reset con 0x558f11f8d800 session 0x558f128b8fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 475 heartbeat osd_stat(store_statfs(0x4f3afb000/0x0/0x4ffc00000, data 0x3a82d89/0x3d0d000, compress 0x0/0x0/0x0, omap 0x8237d, meta 0x836dc83), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211656704 unmapped: 20357120 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 475 ms_handle_reset con 0x558f12193c00 session 0x558f10855dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463976 data_alloc: 234881024 data_used: 14965477
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211656704 unmapped: 20357120 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211656704 unmapped: 20357120 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 476 handle_osd_map epochs [476,477], i have 476, src has [1,477]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.807857513s of 10.949057579s, submitted: 102
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 477 ms_handle_reset con 0x558f12f4a000 session 0x558f12651a40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 477 ms_handle_reset con 0x558f116ad000 session 0x558f1273b880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211673088 unmapped: 20340736 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 478 ms_handle_reset con 0x558f0fa41800 session 0x558f10757180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211673088 unmapped: 20340736 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 478 ms_handle_reset con 0x558f11f8d800 session 0x558f12651dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 478 ms_handle_reset con 0x558f12193c00 session 0x558f12cd1180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211673088 unmapped: 20340736 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 478 heartbeat osd_stat(store_statfs(0x4f3af1000/0x0/0x4ffc00000, data 0x3a87fcc/0x3d16000, compress 0x0/0x0/0x0, omap 0x8251b, meta 0x836dae5), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 478 ms_handle_reset con 0x558f12f4a000 session 0x558f0fdaafc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3473552 data_alloc: 234881024 data_used: 14966362
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 478 heartbeat osd_stat(store_statfs(0x4f3af1000/0x0/0x4ffc00000, data 0x3a87fcc/0x3d16000, compress 0x0/0x0/0x0, omap 0x8251b, meta 0x836dae5), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211673088 unmapped: 20340736 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211681280 unmapped: 20332544 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 480 ms_handle_reset con 0x558f1263c400 session 0x558f12650a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 480 ms_handle_reset con 0x558f0fa41800 session 0x558f11e34540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211689472 unmapped: 20324352 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 480 ms_handle_reset con 0x558f11f8d800 session 0x558f10855c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211689472 unmapped: 20324352 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 481 ms_handle_reset con 0x558f12193c00 session 0x558f12baddc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 481 ms_handle_reset con 0x558f12f4a000 session 0x558f0fdaa8c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211689472 unmapped: 20324352 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 481 heartbeat osd_stat(store_statfs(0x4f3aeb000/0x0/0x4ffc00000, data 0x3a8d20f/0x3d1f000, compress 0x0/0x0/0x0, omap 0x82c39, meta 0x836d3c7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480540 data_alloc: 234881024 data_used: 14966975
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 482 ms_handle_reset con 0x558f16b28400 session 0x558f12bacfc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211697664 unmapped: 20316160 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211697664 unmapped: 20316160 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 482 handle_osd_map epochs [482,483], i have 483, src has [1,483]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.540693283s of 10.620892525s, submitted: 61
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 483 ms_handle_reset con 0x558f0fa41800 session 0x558f12d47dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211714048 unmapped: 20299776 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 483 ms_handle_reset con 0x558f11f8d800 session 0x558f10855180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 484 ms_handle_reset con 0x558f12193c00 session 0x558f0ffc28c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f3ae4000/0x0/0x4ffc00000, data 0x3a908c4/0x3d26000, compress 0x0/0x0/0x0, omap 0x832cd, meta 0x836cd33), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211722240 unmapped: 20291584 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 484 ms_handle_reset con 0x558f12f4a000 session 0x558f12bac700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 484 ms_handle_reset con 0x558f16b2ac00 session 0x558f12cd1c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f3ae4000/0x0/0x4ffc00000, data 0x3a92452/0x3d28000, compress 0x0/0x0/0x0, omap 0x83357, meta 0x836cca9), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211722240 unmapped: 20291584 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489107 data_alloc: 234881024 data_used: 14966975
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 485 ms_handle_reset con 0x558f0fa41800 session 0x558f125b8380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211730432 unmapped: 20283392 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211730432 unmapped: 20283392 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 485 handle_osd_map epochs [485,486], i have 485, src has [1,486]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 486 ms_handle_reset con 0x558f11f8d800 session 0x558f1273ba40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211738624 unmapped: 20275200 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 486 ms_handle_reset con 0x558f12193c00 session 0x558f12d47dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 487 ms_handle_reset con 0x558f12f4a000 session 0x558f12651dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 487 heartbeat osd_stat(store_statfs(0x4f3ad5000/0x0/0x4ffc00000, data 0x3a97707/0x3d33000, compress 0x0/0x0/0x0, omap 0x839eb, meta 0x836c615), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211746816 unmapped: 20267008 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 487 ms_handle_reset con 0x558f0ffef800 session 0x558f124d5340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 487 ms_handle_reset con 0x558f0fa41800 session 0x558f10854540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211746816 unmapped: 20267008 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 487 heartbeat osd_stat(store_statfs(0x4f3ad6000/0x0/0x4ffc00000, data 0x3a976f7/0x3d32000, compress 0x0/0x0/0x0, omap 0x83a75, meta 0x836c58b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500566 data_alloc: 234881024 data_used: 14966975
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 487 handle_osd_map epochs [487,488], i have 487, src has [1,488]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211763200 unmapped: 20250624 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 488 ms_handle_reset con 0x558f0ffef800 session 0x558f10757340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211763200 unmapped: 20250624 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.844631195s of 10.109873772s, submitted: 103
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 489 ms_handle_reset con 0x558f11f8d800 session 0x558f126f1c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211763200 unmapped: 20250624 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 489 ms_handle_reset con 0x558f12193c00 session 0x558f10b3d500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 489 handle_osd_map epochs [489,490], i have 489, src has [1,490]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 490 ms_handle_reset con 0x558f12f4a000 session 0x558f10792a80
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211771392 unmapped: 20242432 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 490 ms_handle_reset con 0x558f0fa41800 session 0x558f12bac700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211771392 unmapped: 20242432 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 490 ms_handle_reset con 0x558f0ffef800 session 0x558f1242e540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f3acf000/0x0/0x4ffc00000, data 0x3a9c8e8/0x3d3b000, compress 0x0/0x0/0x0, omap 0x84109, meta 0x836bef7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509668 data_alloc: 234881024 data_used: 14968173
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 490 handle_osd_map epochs [490,491], i have 490, src has [1,491]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211779584 unmapped: 20234240 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 491 ms_handle_reset con 0x558f11f8d800 session 0x558f0fa44380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211779584 unmapped: 20234240 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211779584 unmapped: 20234240 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 492 ms_handle_reset con 0x558f12193c00 session 0x558f12baddc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211787776 unmapped: 20226048 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 492 heartbeat osd_stat(store_statfs(0x4f3ace000/0x0/0x4ffc00000, data 0x3a9e383/0x3d3e000, compress 0x0/0x0/0x0, omap 0x83d27, meta 0x836c2d9), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211787776 unmapped: 20226048 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 492 ms_handle_reset con 0x558f12f4a000 session 0x558f12cd1c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518268 data_alloc: 234881024 data_used: 14969387
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 493 ms_handle_reset con 0x558f0fa41800 session 0x558f12212e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211812352 unmapped: 20201472 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 493 ms_handle_reset con 0x558f0ffef800 session 0x558f10854700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 493 ms_handle_reset con 0x558f11f8d800 session 0x558f124d5c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211795968 unmapped: 20217856 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211795968 unmapped: 20217856 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211795968 unmapped: 20217856 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f3ac7000/0x0/0x4ffc00000, data 0x3aa1b53/0x3d43000, compress 0x0/0x0/0x0, omap 0x8333b, meta 0x836ccc5), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211795968 unmapped: 20217856 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3520323 data_alloc: 234881024 data_used: 14969956
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211795968 unmapped: 20217856 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211795968 unmapped: 20217856 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211795968 unmapped: 20217856 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211795968 unmapped: 20217856 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f3ac7000/0x0/0x4ffc00000, data 0x3aa1b53/0x3d43000, compress 0x0/0x0/0x0, omap 0x8333b, meta 0x836ccc5), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211795968 unmapped: 20217856 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 29K writes, 113K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s#012Cumulative WAL: 29K writes, 10K syncs, 2.70 writes per sync, written: 0.08 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 46K keys, 11K commit groups, 1.0 writes per commit group, ingest: 34.00 MB, 0.06 MB/s#012Interval WAL: 11K writes, 5094 syncs, 2.35 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 493 handle_osd_map epochs [494,494], i have 493, src has [1,494]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.502725601s of 17.860221863s, submitted: 123
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523097 data_alloc: 234881024 data_used: 14969956
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211795968 unmapped: 20217856 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211795968 unmapped: 20217856 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f3ac4000/0x0/0x4ffc00000, data 0x3aa35d2/0x3d46000, compress 0x0/0x0/0x0, omap 0x833c5, meta 0x836cc3b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523097 data_alloc: 234881024 data_used: 14969956
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f3ac4000/0x0/0x4ffc00000, data 0x3aa35d2/0x3d46000, compress 0x0/0x0/0x0, omap 0x833c5, meta 0x836cc3b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f3ac4000/0x0/0x4ffc00000, data 0x3aa35d2/0x3d46000, compress 0x0/0x0/0x0, omap 0x833c5, meta 0x836cc3b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523097 data_alloc: 234881024 data_used: 14969956
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523097 data_alloc: 234881024 data_used: 14969956
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f3ac4000/0x0/0x4ffc00000, data 0x3aa35d2/0x3d46000, compress 0x0/0x0/0x0, omap 0x833c5, meta 0x836cc3b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f3ac4000/0x0/0x4ffc00000, data 0x3aa35d2/0x3d46000, compress 0x0/0x0/0x0, omap 0x833c5, meta 0x836cc3b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523097 data_alloc: 234881024 data_used: 14969956
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f3ac4000/0x0/0x4ffc00000, data 0x3aa35d2/0x3d46000, compress 0x0/0x0/0x0, omap 0x833c5, meta 0x836cc3b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211804160 unmapped: 20209664 heap: 232013824 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.528585434s of 23.536882401s, submitted: 12
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 ms_handle_reset con 0x558f12193c00 session 0x558f12772c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212246528 unmapped: 21389312 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 ms_handle_reset con 0x558f12f4a000 session 0x558f113b9340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 ms_handle_reset con 0x558f0fa41800 session 0x558f1242ee00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 ms_handle_reset con 0x558f0ffef800 session 0x558f12651340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 ms_handle_reset con 0x558f11f8d800 session 0x558f0ffc2540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f33d3000/0x0/0x4ffc00000, data 0x41965d2/0x4439000, compress 0x0/0x0/0x0, omap 0x832d4, meta 0x836cd2c), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212328448 unmapped: 21307392 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570248 data_alloc: 234881024 data_used: 14969956
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212328448 unmapped: 21307392 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212328448 unmapped: 21307392 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 ms_handle_reset con 0x558f12193c00 session 0x558f124d5500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 212328448 unmapped: 21307392 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 ms_handle_reset con 0x558f12f4a000 session 0x558f10756c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 ms_handle_reset con 0x558f0fa41800 session 0x558f128b8fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 ms_handle_reset con 0x558f0ffef800 session 0x558f0ffd6c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211902464 unmapped: 21733376 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 211910656 unmapped: 21725184 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f33a8000/0x0/0x4ffc00000, data 0x41c05e2/0x4464000, compress 0x0/0x0/0x0, omap 0x8350b, meta 0x836caf5), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583949 data_alloc: 234881024 data_used: 16118372
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213590016 unmapped: 20045824 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213590016 unmapped: 20045824 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213590016 unmapped: 20045824 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f33a8000/0x0/0x4ffc00000, data 0x41c05e2/0x4464000, compress 0x0/0x0/0x0, omap 0x8350b, meta 0x836caf5), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213590016 unmapped: 20045824 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213590016 unmapped: 20045824 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3612877 data_alloc: 234881024 data_used: 21046372
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213590016 unmapped: 20045824 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213590016 unmapped: 20045824 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213590016 unmapped: 20045824 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f33a8000/0x0/0x4ffc00000, data 0x41c05e2/0x4464000, compress 0x0/0x0/0x0, omap 0x8350b, meta 0x836caf5), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213590016 unmapped: 20045824 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213590016 unmapped: 20045824 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.503765106s of 16.607517242s, submitted: 34
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3637575 data_alloc: 234881024 data_used: 21105764
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217776128 unmapped: 15859712 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f2ce0000/0x0/0x4ffc00000, data 0x48875e2/0x4b2b000, compress 0x0/0x0/0x0, omap 0x8350b, meta 0x836caf5), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f2ce0000/0x0/0x4ffc00000, data 0x48875e2/0x4b2b000, compress 0x0/0x0/0x0, omap 0x8350b, meta 0x836caf5), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3676929 data_alloc: 234881024 data_used: 23319652
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 ms_handle_reset con 0x558f12868c00 session 0x558f0ffd7dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3677185 data_alloc: 234881024 data_used: 23327844
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f2ce0000/0x0/0x4ffc00000, data 0x48875e2/0x4b2b000, compress 0x0/0x0/0x0, omap 0x83595, meta 0x836ca6b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f2ce0000/0x0/0x4ffc00000, data 0x48875e2/0x4b2b000, compress 0x0/0x0/0x0, omap 0x83595, meta 0x836ca6b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217833472 unmapped: 15802368 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.623865128s of 14.755106926s, submitted: 74
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3680001 data_alloc: 234881024 data_used: 23352420
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217710592 unmapped: 15925248 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 495 ms_handle_reset con 0x558f14399c00 session 0x558f0ffc3340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217718784 unmapped: 15917056 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217907200 unmapped: 15728640 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 495 handle_osd_map epochs [495,496], i have 495, src has [1,496]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 496 ms_handle_reset con 0x558f0fdc7c00 session 0x558f0fa45dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 496 heartbeat osd_stat(store_statfs(0x4f2cb2000/0x0/0x4ffc00000, data 0x48b21e0/0x4b58000, compress 0x0/0x0/0x0, omap 0x83595, meta 0x836ca6b), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217915392 unmapped: 15720448 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 496 heartbeat osd_stat(store_statfs(0x4f2c89000/0x0/0x4ffc00000, data 0x48d8d7c/0x4b7d000, compress 0x0/0x0/0x0, omap 0x83f0f, meta 0x836c0f1), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217931776 unmapped: 15704064 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 496 handle_osd_map epochs [497,497], i have 496, src has [1,497]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0fa41800 session 0x558f113b8540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3698533 data_alloc: 234881024 data_used: 23352420
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217939968 unmapped: 15695872 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0fa41000 session 0x558f12b95340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217939968 unmapped: 15695872 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0ffef800 session 0x558f10b3cc40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0fdc7c00 session 0x558f0ffc3500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217939968 unmapped: 15695872 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217939968 unmapped: 15695872 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217939968 unmapped: 15695872 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3698661 data_alloc: 234881024 data_used: 23352518
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 heartbeat osd_stat(store_statfs(0x4f2c7b000/0x0/0x4ffc00000, data 0x48e6918/0x4b8c000, compress 0x0/0x0/0x0, omap 0x84019, meta 0x836bfe7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217939968 unmapped: 15695872 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217939968 unmapped: 15695872 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217939968 unmapped: 15695872 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f12868c00 session 0x558f124e36c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217939968 unmapped: 15695872 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217939968 unmapped: 15695872 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.841586113s of 15.086134911s, submitted: 126
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0fa41800 session 0x558f113b8000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0fa41000 session 0x558f0fa45dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3701181 data_alloc: 234881024 data_used: 23504070
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 heartbeat osd_stat(store_statfs(0x4f2c7f000/0x0/0x4ffc00000, data 0x48e697a/0x4b8d000, compress 0x0/0x0/0x0, omap 0x835a3, meta 0x836ca5d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217948160 unmapped: 15687680 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217948160 unmapped: 15687680 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217948160 unmapped: 15687680 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 217948160 unmapped: 15687680 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218062848 unmapped: 15572992 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3704795 data_alloc: 234881024 data_used: 23655622
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0fdc7c00 session 0x558f10756c40
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218062848 unmapped: 15572992 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 heartbeat osd_stat(store_statfs(0x4f2c72000/0x0/0x4ffc00000, data 0x48f397a/0x4b9a000, compress 0x0/0x0/0x0, omap 0x835a3, meta 0x836ca5d), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218062848 unmapped: 15572992 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f14399c00 session 0x558f12213500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0ffef800 session 0x558f10777500
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218062848 unmapped: 15572992 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218062848 unmapped: 15572992 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218062848 unmapped: 15572992 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.198832512s of 10.345111847s, submitted: 34
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705429 data_alloc: 234881024 data_used: 23659718
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218062848 unmapped: 15572992 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 heartbeat osd_stat(store_statfs(0x4f2c72000/0x0/0x4ffc00000, data 0x48f397a/0x4b9a000, compress 0x0/0x0/0x0, omap 0x8392d, meta 0x836c6d3), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218628096 unmapped: 15007744 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 heartbeat osd_stat(store_statfs(0x4f2be9000/0x0/0x4ffc00000, data 0x497c97a/0x4c23000, compress 0x0/0x0/0x0, omap 0x8392d, meta 0x836c6d3), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218628096 unmapped: 15007744 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0fa41000 session 0x558f113e01c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218062848 unmapped: 15572992 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0fa41800 session 0x558f11e341c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0fdc7c00 session 0x558f0fb0c000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 15556608 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705956 data_alloc: 234881024 data_used: 23708870
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0ffef800 session 0x558f126d0e00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 15556608 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 heartbeat osd_stat(store_statfs(0x4f2c72000/0x0/0x4ffc00000, data 0x48f397a/0x4b9a000, compress 0x0/0x0/0x0, omap 0x838e2, meta 0x836c71e), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f14399c00 session 0x558f12651dc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218103808 unmapped: 15532032 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 heartbeat osd_stat(store_statfs(0x4f2c72000/0x0/0x4ffc00000, data 0x48e7918/0x4b8d000, compress 0x0/0x0/0x0, omap 0x833ec, meta 0x836cc14), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0fa41000 session 0x558f126d08c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0fa41800 session 0x558f0e417c00
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218112000 unmapped: 15523840 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 ms_handle_reset con 0x558f0fdc7c00 session 0x558f116b48c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 497 handle_osd_map epochs [498,498], i have 497, src has [1,498]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 498 ms_handle_reset con 0x558f0ffef800 session 0x558f1245c380
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218112000 unmapped: 15523840 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 498 ms_handle_reset con 0x558f12866800 session 0x558f107768c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 498 ms_handle_reset con 0x558f0fa41000 session 0x558f12651180
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218112000 unmapped: 15523840 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 498 handle_osd_map epochs [499,499], i have 498, src has [1,499]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.858294487s of 10.007182121s, submitted: 77
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3708962 data_alloc: 234881024 data_used: 23659620
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 499 heartbeat osd_stat(store_statfs(0x4f2c80000/0x0/0x4ffc00000, data 0x48df0f8/0x4b8a000, compress 0x0/0x0/0x0, omap 0x8388b, meta 0x836c775), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 499 ms_handle_reset con 0x558f0fa41800 session 0x558f10793880
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218112000 unmapped: 15523840 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 499 heartbeat osd_stat(store_statfs(0x4f2c80000/0x0/0x4ffc00000, data 0x48df0f8/0x4b8a000, compress 0x0/0x0/0x0, omap 0x8388b, meta 0x836c775), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 499 handle_osd_map epochs [500,500], i have 499, src has [1,500]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 500 ms_handle_reset con 0x558f0fdc7c00 session 0x558f12650000
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218112000 unmapped: 15523840 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 218112000 unmapped: 15523840 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 500 ms_handle_reset con 0x558f0ffef800 session 0x558f107761c0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 500 ms_handle_reset con 0x558f12816800 session 0x558f1292c700
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 219168768 unmapped: 14467072 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 500 ms_handle_reset con 0x558f11f8d800 session 0x558f12d46540
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 500 ms_handle_reset con 0x558f12193c00 session 0x558f11e34fc0
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 219176960 unmapped: 14458880 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 500 handle_osd_map epochs [501,501], i have 500, src has [1,501]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 501 ms_handle_reset con 0x558f0fa41000 session 0x558f124d5340
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566587 data_alloc: 234881024 data_used: 14973954
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 501 heartbeat osd_stat(store_statfs(0x4f3aaf000/0x0/0x4ffc00000, data 0x3aaf711/0x3d5b000, compress 0x0/0x0/0x0, omap 0x83aaf, meta 0x836c551), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 501 handle_osd_map epochs [501,502], i have 501, src has [1,502]
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569361 data_alloc: 234881024 data_used: 14973954
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 502 heartbeat osd_stat(store_statfs(0x4f3aac000/0x0/0x4ffc00000, data 0x3ab1190/0x3d5e000, compress 0x0/0x0/0x0, omap 0x83b39, meta 0x836c4c7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 502 heartbeat osd_stat(store_statfs(0x4f3aac000/0x0/0x4ffc00000, data 0x3ab1190/0x3d5e000, compress 0x0/0x0/0x0, omap 0x83b39, meta 0x836c4c7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569361 data_alloc: 234881024 data_used: 14973954
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 502 heartbeat osd_stat(store_statfs(0x4f3aac000/0x0/0x4ffc00000, data 0x3ab1190/0x3d5e000, compress 0x0/0x0/0x0, omap 0x83b39, meta 0x836c4c7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569361 data_alloc: 234881024 data_used: 14973954
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 502 heartbeat osd_stat(store_statfs(0x4f3aac000/0x0/0x4ffc00000, data 0x3ab1190/0x3d5e000, compress 0x0/0x0/0x0, omap 0x83b39, meta 0x836c4c7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 502 heartbeat osd_stat(store_statfs(0x4f3aac000/0x0/0x4ffc00000, data 0x3ab1190/0x3d5e000, compress 0x0/0x0/0x0, omap 0x83b39, meta 0x836c4c7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 502 heartbeat osd_stat(store_statfs(0x4f3aac000/0x0/0x4ffc00000, data 0x3ab1190/0x3d5e000, compress 0x0/0x0/0x0, omap 0x83b39, meta 0x836c4c7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 502 heartbeat osd_stat(store_statfs(0x4f3aac000/0x0/0x4ffc00000, data 0x3ab1190/0x3d5e000, compress 0x0/0x0/0x0, omap 0x83b39, meta 0x836c4c7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569361 data_alloc: 234881024 data_used: 14973954
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569361 data_alloc: 234881024 data_used: 14973954
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 502 heartbeat osd_stat(store_statfs(0x4f3aac000/0x0/0x4ffc00000, data 0x3ab1190/0x3d5e000, compress 0x0/0x0/0x0, omap 0x83b39, meta 0x836c4c7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 502 heartbeat osd_stat(store_statfs(0x4f3aac000/0x0/0x4ffc00000, data 0x3ab1190/0x3d5e000, compress 0x0/0x0/0x0, omap 0x83b39, meta 0x836c4c7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569361 data_alloc: 234881024 data_used: 14973954
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: prioritycache tune_memory target: 4294967296 mapped: 213639168 unmapped: 19996672 heap: 233635840 old mem: 2845415832 new mem: 2845415832
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: osd.1 502 heartbeat osd_stat(store_statfs(0x4f3aac000/0x0/0x4ffc00000, data 0x3ab1190/0x3d5e000, compress 0x0/0x0/0x0, omap 0x83b39, meta 0x836c4c7), peers [0,2] op hist [])
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 07:15:02 np0005604943 ceph-osd[87192]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3569361 data_alloc: 234881024 data_used: 14973954
